Business is changing. Will you adapt or be left behind?
Get up to speed and deepen your understanding of the topics that are shaping your company’s future with the Insights You Need from Harvard Business Review series. Featuring HBR’s smartest thinking on fast-moving issues—blockchain, cybersecurity, AI, and more—each book provides the foundational introduction and practical case studies your organization needs to compete today and collects the best research, interviews, and analysis to get it ready for tomorrow.
You can’t afford to ignore how these issues will transform the landscape of business and society. The Insights You Need series will help you grasp these critical ideas—and prepare you and your company for the future.
Books in the series include:
Agile
Artificial Intelligence
Blockchain
Climate Change
Crypto
Customer Data and Privacy
Cybersecurity
The Future of Work
Generative AI
Global Recession
Hybrid Workplace
Monopolies and Tech Giants
Multigenerational Workplace
Racial Justice
Strategic Analytics
Supply Chain
Web3
The Year in Tech 2023
The Year in Tech 2024
Harvard Business Review Press
Boston, Massachusetts
HBR Press Quantity Sales Discounts
Harvard Business Review Press titles are available at significant quantity discounts when purchased in bulk for client gifts, sales promotions, and premiums. Special editions, including books with corporate logos, customized covers, and letters from the company or CEO printed in the front matter, as well as excerpts of existing books, can also be created in large quantities for special needs.
For details and discount information for both print and ebook formats, contact booksales@harvardbusiness.org, tel. 800-988-0886, or www.hbr.org/bulksales.
Copyright 2024 Harvard Business School Publishing Corporation
All rights reserved
No part of this publication may be reproduced, stored in or introduced into a retrieval system, or transmitted, in any form, or by any means (electronic, mechanical, photocopying, recording, or otherwise), without the prior permission of the publisher. Requests for permission should be directed to permissions@harvardbusiness.org, or mailed to Permissions, Harvard Business School Publishing, 60 Harvard Way, Boston, Massachusetts 02163.
The web addresses referenced in this book were live and correct at the time of the book’s publication but may be subject to change.
Library of Congress Cataloging-in-Publication Data
Names: Harvard Business Review Press, issuing body.
Title: Generative AI.
Other titles: Generative AI (Harvard Business Review Press) | Insights you need from Harvard Business Review.
Description: Boston, Massachusetts : Harvard Business Review Press, [2023] | Series: Insights you need series | Includes index.
Identifiers: LCCN 2023029121 (print) | LCCN 2023029122 (ebook) | ISBN 9781647826390 (paperback) | ISBN 9781647826406 (epub)
Subjects: LCSH: Artificial intelligence. | Business—Data processing. | Success in business. | Industrial management.
Classification: LCC HD30.2 .G44 2023 (print) | LCC HD30.2 (ebook) | DDC 658.4/0380285—dc23/eng/20230925
LC record available at https://lccn.loc.gov/2023029121
LC ebook record available at https://lccn.loc.gov/2023029122
ISBN: 978-1-64782-639-0
eISBN: 978-1-64782-640-6
by David C. Edelman and Mark Abraham
It’s coming. Generative AI will change the nature of how we interact with all software. And given how many brands have significant software components in how they interact with customers, generative AI will drive and distinguish how more brands compete.
In a previous HBR piece, we discussed how the use of one’s customer information is already differentiating branded experiences.1 Now, with generative AI, personalization will go even further, tailoring all aspects of digital interaction to how the customer wants it to flow, not how product designers envision cramming in more menus and features. As the software follows the customer, it will go to places that range beyond the tight boundaries of a brand’s product. You will need to offer solutions to things the customer wants to do. Solve the full package of what they need and help them through their full journey to get there, even if it means linking to outside partners, rethinking the definition of your offerings, and developing the underlying data and tech architecture to connect everything involved in the solution.
Generative AI can create—generate—text, speech, images, music, video, and especially code. When that capability is joined with a feed of a person’s own information and used to tailor the when, what, and how of an interaction, the ease with which that person can get things done and the accessibility of software go up dramatically. The simple input question box that stands at the center of Google—and now of most generative AI systems, such as ChatGPT and DALL-E 2—will power more systems. Say goodbye to software drop-down menus and the inherently guided restrictions they place on how you use them. Instead, you’ll just see “What do you want to do today?” And when you type in your answer, the software will likely offer some suggestions, drawing on its knowledge of what you did last time, what your triggers are in your current context, and what you’ve already stored in the system as your core goals; for example, “save for a trip,” “remodel our kitchen,” “manage meal plans for my family of five with special dietary needs.”
Without the boundaries of a conventional software interface, consumers won’t care whether the brand behind the software has limitations. The change in how we interact and what we expect will be dramatic—and dramatically more democratizing.
So much of the hype on generative AI has focused on its ability to generate text, images, and sounds, but it also can create code to automate actions and facilitate pulling in external and internal data. By generating code in response to a command, it creates a shortcut that takes a user from a command to an action that simply gets the job done. Even questions about and analyses of the data stored in an application (e.g., “Who are the contacts I have not called in the last 90 days?” or “When is the next time I am scheduled to be in NYC with an opening for dinner?”) will be easily handled. To answer such questions now, we have to go into an application and gather data (possibly manually) from outside of the application itself. Now the query can be recognized, code created, possibilities ranked, and the best answer generated. In milliseconds.
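The shortcut from command to action can be pictured with a toy example. The query below stands in for the kind of code a generative model might emit for the “contacts I have not called in the last 90 days” request; the table, names, and dates are invented for illustration.

```python
import sqlite3
from datetime import datetime, timedelta

# Toy contacts table standing in for an application's data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, last_called TEXT)")
conn.executemany(
    "INSERT INTO contacts VALUES (?, ?)",
    [("Ana", "2024-01-05"), ("Bo", "2023-06-01"), ("Cy", "2023-01-15")],
)

# The query a generative model might write for the command
# "Who are the contacts I have not called in the last 90 days?"
today = datetime(2024, 1, 31)  # fixed "today" so the example is reproducible
cutoff = (today - timedelta(days=90)).strftime("%Y-%m-%d")
stale = [
    row[0]
    for row in conn.execute(
        "SELECT name FROM contacts WHERE last_called < ?", (cutoff,)
    )
]
print(stale)
```

The point is not the SQL itself but that the user never writes it: the model recognizes the query, generates the code, and returns only the answer.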
This drastically simplifies how we interact with what we think of as today’s applications. It also enables more brands to build applications as part of their value proposition: “Given the weather, traffic, and who I’m with, give me a tourist itinerary for the afternoon, with an ongoing guide, and the ability to just buy any tickets in advance to skip any lines.” “Here’s my budget, here are five pictures of my current bathroom, here’s what I want from it. Now give me a renovation design, a complete plan for doing it, and the ability to put it out for bid.” Who will create these capabilities? Powerful tech companies? Brands that already have relationships in their relevant categories? New, focused disruptors? The game is just starting, but the needed capabilities and business philosophies are already taking shape.
In a world where generative AI and all the other evolving AI systems proliferate, building an offering requires focusing on the broadest possible view of your pool of data, of the journeys you can enable, and of the risks they raise.
Solving for a customer’s complete needs will require pulling from information across your company and likely beyond your boundaries. One of the biggest challenges for most applications—and actually, for most IT departments—is bringing together data from disparate systems. Many AI systems can write the code needed to understand the schemas of two different databases and integrate them into one repository, which can save several steps in standardizing data schemas. AI teams still need to dedicate time for data cleansing and data governance (arguably even more so); for example, aligning on the right definitions of key data features. However, with AI capabilities in hand, the next steps in the process to bring the data together become easier.
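A minimal sketch of that kind of integration, assuming two hypothetical customer tables with mismatched schemas; in practice, the column mapping is the part an AI system would generate by reading both schemas, and it is hand-written here only for illustration.

```python
import pandas as pd

# Two hypothetical customer tables with different schemas.
crm = pd.DataFrame({"cust_id": [1, 2], "full_name": ["Ana Li", "Bo Ray"]})
billing = pd.DataFrame({"customer": [2, 3], "name": ["Bo Ray", "Cy Oz"]})

# The mapping an AI system might propose after inspecting both schemas.
mapping = {
    "crm": {"cust_id": "customer_id", "full_name": "name"},
    "billing": {"customer": "customer_id", "name": "name"},
}

# Rename to the shared schema, stack, and deduplicate on the shared key.
unified = pd.concat(
    [crm.rename(columns=mapping["crm"]), billing.rename(columns=mapping["billing"])]
).drop_duplicates("customer_id")
print(unified)
```

Generating the mapping saves the mechanical steps; the judgment calls (which definition of “customer” wins, how conflicts are resolved) remain governance work for the team.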
Narrative AI, for example, offers a marketplace for buying and selling data, along with data collaboration software that allows companies to import data from anywhere into their own repositories, aligned to their schema, with merely a click. Data from across a company—or from partners or from sellers of data—can be integrated and then used for modeling in a flash.
Combining proprietary data with public data, data from other available AI tools, and data from many external parties can serve to dramatically improve the AI’s ability to understand one’s context, predict what is being asked, and have a broader pool from which to execute a command.
The old rule around “garbage in, garbage out” still applies, however. Especially when it comes to integrating third-party data, it is important to cross-check the accuracy with internal data before integrating it into the underlying dataset; for example, one fashion brand recently found that gender data purchased from a third-party source didn’t match its internal data 50% of the time. Source and reliability matter.
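The cross-check itself is simple to operationalize. A sketch, with invented customer IDs and attribute values, of comparing a purchased attribute against internal records:

```python
# Hypothetical cross-check of a purchased attribute (here, gender) against
# internal records, keyed by customer ID. Values are invented for illustration.
internal = {"c1": "F", "c2": "M", "c3": "F", "c4": "M"}
purchased = {"c1": "F", "c2": "F", "c3": "M", "c4": "M"}

shared = internal.keys() & purchased.keys()
mismatch_rate = sum(internal[k] != purchased[k] for k in shared) / len(shared)
print(f"{mismatch_rate:.0%}")  # here, half the records disagree
```

A mismatch rate this high, like the one the fashion brand found, is a signal to quarantine the third-party feed rather than blend it into the underlying dataset.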
Without obvious restrictions on what a customer can ask for in an input box, the AI needs to have guidelines to ensure that it responds appropriately to things beyond its means or that are inappropriate. This amplifies the need for a sharp focus on the rules layer, where the experienced designers, marketers, and business decision-makers set the target parameters for the AI to optimize.
For example, for an airline brand that leveraged AI to decide on the “next best conversation” to engage in with customers, we set rules around what products could be marketed to which customers, what copy could be used in which jurisdictions, and rules around antirepetition to ensure customers didn’t get bombarded with irrelevant messages.
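A rules layer of this sort can be sketched in a few lines. The products, customer tiers, and frequency cap below are hypothetical stand-ins, not the airline’s actual rules:

```python
# Minimal sketch of a rules layer that screens a proposed AI action
# before it executes. All names and thresholds are illustrative.
RULES = {
    "eligible_products": {"gold": {"elite"}, "saver": {"elite", "basic"}},
    "max_messages_per_week": 2,
}

def allowed(product: str, customer_tier: str, messages_this_week: int) -> bool:
    if customer_tier not in RULES["eligible_products"].get(product, set()):
        return False  # product may not be marketed to this customer
    if messages_this_week >= RULES["max_messages_per_week"]:
        return False  # antirepetition: don't bombard the customer
    return True

print(allowed("gold", "basic", 0))   # False: tier not eligible
print(allowed("saver", "basic", 0))  # True
print(allowed("saver", "basic", 2))  # False: weekly cap reached
```

The AI proposes; the rules layer, owned by the business decision-makers, disposes.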
These constraints become even more critical in the era of generative AI. As pioneers of these solutions are finding, customers will be quick to point out when the machine “breaks” and produces nonsensical solutions. The best approaches will therefore start small and be tailored to specific solutions where the rules can be tightly defined and human decision-makers will be able to design rules for edge cases.
Customers will just ask for what they need and will seek the simplest and/or most cost-effective way to get it done. What is the true end goal of the customer? How far can you get in satisfying it? With the ability to move information more easily across parties, you can build partnerships for data and for execution of the actions to help a customer through their journey; therefore, your ecosystem of business relationships will differentiate your brand.
In his impressive demo of how HubSpot is incorporating generative AI into ChatSpot, Dharmesh Shah, HubSpot’s CTO and founder, lays out how the company is mingling the capabilities of HubSpot with OpenAI and with other tools.2 Not only does he show HubSpot’s interface reduced to a single text-entry prompt, but he also shows new capabilities that extend well beyond HubSpot’s current borders. A salesperson seeking to email a business leader at a target company can use ChatSpot to research the company and the target business leader, then draft an email that incorporates information both from that research and from what the system knows about the salesperson. The resulting email draft can then be edited, sent, and tracked by HubSpot’s system, and the target business leader automatically entered into a contact database with all associated information.
The power of connected information, automatic code creation, and generated output is leading many other companies to extend their borders, not as conventional vertical or horizontal expansion, but as journey expansion. When you can offer services based on a simple user command, those commands will reflect the customer’s true goal and the total solution they seek, not just a small component that you may have been dealing with before.
Solving for those broader needs inevitably will pull you into new kinds of partner relationships. As you build out your end-to-end journey capabilities, how you construct those business relationships will be critical new bases for strategy. How trustworthy, how well permissioned, how timely, how comprehensive, how biased is their data? How will they use data your brand sends out? What is the basis of your relationship, quality control, and data integration? Prenegotiated privileged partnerships? A simple vendor relationship? How are you charging for the broader service, and how will the parties involved get their cut?
Similar to how search brands like Google, e-commerce marketplaces like Amazon, and recommendation engines like Tripadvisor become gateways for sellers, more brands can become front-end navigators for a customer journey if they can offer quality partners, experience personalization, and simplicity. CVS could become a full health network coordinator that health providers, health tech, wellness services, pharma, and other support services will plug into. When its app can let you simply ask: “How can you help me lose 30 pounds?” or “How can you help me deal with my increasing arthritis?” the end-to-end program it can generate and then completely manage, through prompts to you and information passed around its network, will be a critical differentiator in how CVS, as a brand, builds loyalty, captures your data, and uses that to keep increasing service quality.
The way you manage data becomes part of your brand, and the outcomes for your customers will have edge cases and bias risks that you should seek out and mitigate. We are all reading stories of how people are pushing generative AI systems, such as ChatGPT, to extremes and getting back what the application’s developers call “hallucinations,” or bizarre responses. We are also seeing responses that come back as solid assertions of wrong facts. Or responses that are derived from biased bases of data that can lead to dangerous outcomes for some populations. Companies are also getting “outed” for sharing private customer information with other parties without the customers’ permission—clearly not for the benefit of their customers per se.
The risks—from the core data, to the management of data, to the nature of the output of the generative AI—will keep multiplying. Some companies have created new positions for chief customer protection officers whose role is to stay ahead of potential risk scenarios and, more importantly, to build safeguards into how product managers are developing and managing the systems. Risk committees on corporate boards are already bringing in new experts and expanding their purviews, but more action has to happen preemptively. Testing data pools for bias; understanding where data came from and its copyright, accuracy, and privacy risks; managing explicit customer permissions; limiting where information can go; and constantly testing the application for edge cases where customers could push it to extremes are all critical processes companies should build into their core product management discipline and add onto the questions that top management routinely has to ask. Boards will expect to see dashboards on these kinds of activities, and other external watchdogs, including lawyers representing legal challenges, will demand them as well.
Is it worth it? The risks will constantly multiply, and the costs of creating structures to manage those risks will be real. We’ve only begun to figure out how to manage bias, accuracy, copyright, privacy, and manipulated ranking risks at scale. The opacity of the systems often makes it impossible to explain how an outcome happened if some kind of audit is necessary.
Nonetheless, the capabilities of generative AI are not only available—they are the fastest-growing class of applications ever. Their accuracy will improve as the pool of tapped data increases and as parallel AI systems as well as “humans in the loop” work to find and remedy those nasty hallucinations.
The potential for simplicity, personalization, and democratization of access to new and existing applications will not only pull in hundreds of startups but also tempt many established brands into creating new AI-forward offerings. If brands can do more than just amuse a customer and actually take them through more of the requirements of their journey than ever before—and do so in a way that inspires trust—they could open up new sources of revenue from the services they can enable beyond their currently narrow borders. For the right use cases, speed and personalization could possibly be worth a price premium. But more likely, the automation abilities of AI will pull costs out of the overall system and put pressure on all participants to manage efficiently and compete accordingly.
We are now opening up a real new dialogue between brands and their customers. Literal conversations—not like the esoteric descriptions of what happened in the earlier days of digital interaction. Now we are talking back and forth. Getting things done. Together. Simply. In a trustworthy fashion. Just how the customer wants it. The race is on to see which brands can deliver.
TAKEAWAYS
Generative AI will change the way businesses develop customer-focused products, leading to new levels of personalization and customization.
✓ Generative AI can “generate” text, speech, images, music, video, and code.
✓ When that capability is joined with a feed of a customer’s own information, the ease by which brands can assist customers along their journeys increases dramatically.
✓ Corporations using AI should collect and combine data from several sources, but they must be aware that not all of them may be reliable.
✓ Rules must be developed to guarantee that the AI responds appropriately. Data bias risks need to be reduced and managed.
1. David C. Edelman and Mark Abraham, “Customer Experience in the Age of AI,” Harvard Business Review, March–April 2022, https://
2. Dharmesh Shah, “Say Hi to ChatSpot.ai: The All-in-One A.I. Powered Chat App for Growing Better,” YouTube video, March 6, 2023, https://
Adapted from content posted on hbr.org, April 12, 2023 (product #H07KSV).
by Sheen S. Levine and Dinkar Jain
In 2022, when OpenAI introduced ChatGPT, industry observers responded with both praise and worry. We heard how the technology could displace computer programmers, teachers, financial traders and analysts, graphic designers, and artists. Fearing that AI would kill the college essay, universities rushed to revise curricula. Perhaps the most immediate impact, some said, was that ChatGPT could reinvent or even replace the traditional internet search engine. Search and the related ads bring in the vast majority of Google’s revenue. Will chatbots kill Google?
ChatGPT is a remarkable demonstration of machine learning technology, but it is barely viable as a stand-alone service. To capitalize on its technological prowess, OpenAI needed a partner. So we weren’t surprised when the company quickly announced a deal with Microsoft. The union of the AI startup and the legacy tech company may finally pose a credible threat to Google’s dominance, upping the stakes in the AI arms race. It also offers a lesson in the forces that will dictate which companies will thrive and which will falter in deploying this technology.
To understand what compelled OpenAI to ally itself with Bing (and why Google may still triumph), we consider how this technology differs from past developments such as the telephone or market platforms like Uber or Airbnb. In each of those examples, network effects—where the value of a product goes up as it gains users—played a major role in shaping how those products grew and which companies succeeded. Generative AI services like ChatGPT are subject to similar, but distinct, kinds of network effects. To choose strategies that work with AI, managers and entrepreneurs must grasp how these new kinds of AI network effects work.
AI’s value lies in accurate predictions and suggestions. But unlike traditional products and services, which rely on turning supplies (like electricity or human capital) into outputs (like light or tax advice), AI requires large datasets that must be kept fresh through back-and-forth customer interactions. To remain competitive, an AI operator must corral data, analyze it, offer predictions, and then seek feedback to sharpen subsequent suggestions. The value of the system depends on—and increases with—data that arrives from users.
The technology’s performance—its ability to accurately predict and suggest—hinges on an economic principle called data network effects (some prefer data-driven learning). These are distinct from the familiar direct network effects, like those that make a telephone more valuable as subscribers grow (because there are more people you can call). They are also different from indirect or second-order network effects, which describe how a growing number of buyers invites more sellers to a platform and vice versa—shopping on Etsy or booking on Airbnb becomes more attractive when more sellers are present.
Data network effects are a new form: Like the more familiar effects, the more users, the more valuable the technology is. But here, the value comes not from the number of peers (as with the telephone) or the presence of many buyers and sellers (as on platforms like Etsy). Rather, the effects stem from the nature of the technology: AI improves through reinforcement learning—predictions followed by feedback. As its intelligence increases, the system makes better predictions, enhancing its usefulness, attracting new users and retaining existing ones. More users mean more responses, which further prediction accuracy, creating a virtuous cycle.
Take, for example, Google Maps. It uses AI to recommend the fastest route to your destination. This ability hinges on anticipating the traffic patterns in alternative paths, which it does by drawing on data that arrives from many users. The more people use the app, the more historical and concurrent data it accumulates. With piles of data, Google can compare myriad predictions to actual outcomes: Did you arrive at the time predicted by the app? To perfect the predictions, the app also needs your impressions: How good were the instructions? As objective facts and subjective reviews accumulate, network effects kick in. These effects improve predictions and elevate the app’s value for users—and for Google.
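The virtuous cycle can be caricatured in a toy simulation: feedback shrinks prediction error, and better predictions attract more users, who supply more feedback. The rates below are illustrative assumptions, not measurements.

```python
# Toy simulation of a data network effect. Each round, accumulated user
# feedback reduces prediction error, and lower error attracts more users.
# The specific rate constants are invented for illustration.
error, users = 1.0, 1_000
history = []
for round_ in range(5):
    feedback = users                         # more users -> more feedback signals
    error *= 1 / (1 + feedback / 10_000)     # feedback sharpens predictions
    users = int(users * (1 + (1 - error)))   # better predictions attract users
    history.append((round_, users, round(error, 3)))

for row in history:
    print(row)
```

However stylized, the loop shows why the effect compounds: the operator with the largest user base improves fastest, widening its lead each round.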
Once we understand how network effects drive AI, we can imagine the new strategies the technology requires.
Let’s start with the marriage of OpenAI and Microsoft. When we beta-tested ChatGPT, we were impressed with its creative, humanlike responses, but recognized it was stuck. It relied on a fixed body of data last collected in 2021, and was missing information such as recent events and the current weather. Even worse, it lacked a robust feedback loop: You couldn’t ring the alarm bell when suggestions were hallucinatory (the company did allow a “thumbs down” response). Yet by linking to Microsoft, OpenAI found a way to test its predictions. What Bing users ask—and how they rate the answers—is crucial to updating and improving ChatGPT. The next step, we imagine, is Microsoft feeding the algorithm with the vast cloud of user data it maintains. As it digests untold numbers of Excel sheets, PowerPoint presentations, Word documents, and LinkedIn résumés, ChatGPT will get better at recreating them, to the joy (or horror) of office dwellers.
There are at least three broad lessons here.
When you consider AI network effects, you can better understand the technology’s future. You can also see how these effects, like other network effects, tend to make the rich even richer. The dynamics behind AI mean that early movers may be rewarded handsomely and followers, however quick, may be left on the sidelines. It also implies that when one has access to an AI algorithm and a flow of data, advantages accumulate over time and can’t be easily surmounted. For executives, entrepreneurs, policy makers, and everyone else, the best (and worst) about AI is yet to come.
TAKEAWAYS
Data network effects have allowed AI to become smarter and more powerful, refining and improving its accuracy over time. AI can gain from an accumulation of data collected through each user’s experience by utilizing the power of customer interactions, predictions, and feedback.
✓ Feedback is crucial for generative AI algorithms to perform. Without constant streams of customer interactions, even the best algorithm won’t remain smart for long.
✓ Companies should routinize meticulous gathering of information to maximize the benefits of data network effects.
✓ Everyone should consider the data they share. Facts and feedback are essential for building better predictions, but the value of your data can be captured by someone else.
Adapted from content posted on hbr.org, March 14, 2023 (product #H07JCQ).
by Marc Zao-Sanders and Marc Ramos
There has been a huge amount of hype and speculation about the implications of large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude, Meta’s LLaMA, and GPT-4. ChatGPT, in particular, reached 100 million users in two months, making it the fastest-growing consumer application of all time.
It isn’t clear yet just what kind of impact LLMs will have, and opinions vary hugely. Many experts argue that LLMs will have little impact at all (early academic research suggests that the capability of LLMs is restricted to formal linguistic competence) or that even a near-infinite volume of text-based training data is still severely limiting. Others, such as Wharton professor Ethan Mollick, argue the opposite: “The businesses that understand the significance of this change—and act on it first—will be at a considerable advantage.”1
What we do know now is that generative AI has captured the imagination of the wider public and that it is able to produce first drafts and generate ideas virtually instantaneously. We also know that it can struggle with accuracy.
Despite the open questions about this new technology, companies are searching for ways to apply it—now. Is there a way to cut through the polarizing arguments, hype, and hyperbole and think clearly about where the technology will hit home first? We believe there is.
On risk, how likely and how damaging is the possibility of untruths and inaccuracies being generated and disseminated? On demand, what is the real and sustainable need for this kind of output, beyond the current buzz?
It’s useful to consider these variables together. Thinking of them in a 2 × 2 matrix provides a more nuanced, one-size-doesn’t-fit-all analysis of what may be coming. Indeed, risks and demands differ across different industries and business activities. We have placed some common cross-industry use cases in figure 3-1.
Think about where your business function or industry might sit. For your use case, how much is the risk reduced by introducing a step for human validation? How much might that slow down the process and reduce the demand?
The top-left box—where the consequence of errors is relatively low and market demand is high—will inevitably develop faster and further. For these use cases, there is a ready-made incentive for companies to find solutions, and there are fewer hurdles for their success. We should expect to see a combination of raw, immediate utilization of the technology as well as third-party tools that leverage generative AI and its APIs for their particular domain.
This is happening already in marketing, where several startups have found innovative ways to apply LLMs to generate content marketing copy and ideas and have achieved unicorn status. Marketing requires a lot of idea generation and iteration, messaging tailored to specific audiences, and the production of text-rich messages that can engage and influence audiences. In other words, there are clear uses and demonstrated demand. Importantly, there’s also a wealth of examples that can be used to guide an AI to match style and content. On the other hand, most marketing copy isn’t fact-heavy, and the facts that are important can be corrected in editing.
FIGURE 3-1
Picking a generative AI project
As your company decides where to start exploring generative AI, it’s important to balance risk and demand. One way to think about that is to ask two questions: “How damaging would it be if untruths and inaccuracies were generated and disseminated?” (risk) and “What is the real and sustainable need for this kind of output, beyond the current buzz?” (demand). Consider using this matrix—populated with common, cross-industry use cases—to identify the most valuable, least-risky applications for your company.
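The quadrant logic of the matrix can be expressed in a few lines of code. The risk and demand scores below are illustrative guesses of ours, not figures from the chapter:

```python
def quadrant(risk: float, demand: float, threshold: float = 0.5) -> str:
    """Place a use case in the 2 x 2 matrix by thresholding its two scores."""
    d = "high-demand" if demand >= threshold else "low-demand"
    r = "high-risk" if risk >= threshold else "low-risk"
    return f"{d}/{r}"

# (risk, demand) scores in [0, 1] -- illustrative guesses, not HBR data.
use_cases = {
    "marketing copy drafts": (0.2, 0.9),
    "medical self-diagnosis": (0.9, 0.8),
    "shakespearean-pirate haiku": (0.1, 0.2),
}
for name, (risk, demand) in use_cases.items():
    print(f"{name}: {quadrant(risk, demand)}")
```

Scoring your own team's tasks this way, even roughly, forces the two questions (How damaging could errors be? Is the demand real?) to be answered explicitly rather than by gut feel.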
Looking at the matrix, you can find that there are other opportunities that have received less attention, for instance, learning. Like marketing, creating content for learning—for our purposes, let’s use the example of internal corporate learning tools—requires engaging and effective text and a clear understanding of its audience’s interests. There’s also likely content that can be used to guide a generative AI tool. Priming it with existing documentation, you can ask it to rewrite, synthesize, and update the materials you have to better speak to different audiences or to make learning material more adaptable to different contexts.
Generative AI’s capabilities could also allow learning materials to be delivered differently—woven into the flow of everyday work or replacing clunky FAQs, bulging knowledge centers, and ticketing systems.
The other uses in the high-demand/low-risk box above follow similar logic: They’re for tasks in which people are closely involved and the risk of AI playing fast and loose with facts is low. Take the example of asking the AI to review text: You can feed it a draft, give it some instructions (you want a more detailed version, a softer tone, a five-point summary, or suggestions for how to make the text more concise), and review its suggestions. As a second pair of eyes, the technology is ready to use right now. If you want ideas to feed a brainstorm—steps to take when hiring a modern multimedia designer, or what to buy a 4-year-old who likes trains for her birthday—generative AI is a quick, reliable, and safe bet, as those ideas will likely not appear in the final product.
Filling in the matrix with tasks that are part of your company’s or team’s work can help draw similar parallels. Assessing risk and demand and considering the shared elements of particular tasks can give you a useful starting point and help you draw connections and see opportunities. It can also help you see where it doesn’t make sense to invest time and resources.
The other three quadrants aren’t places where you should rush to find uses for generative AI tools. When demand is low, there’s little motivation for people to utilize or develop the technology. Producing haikus in the style of a Shakespearian pirate may make us laugh and drop our jaws today, but such party tricks will not keep our attention for very much longer. And in cases where there is demand but high risk, general trepidation and regulation will slow the pace of progress. Considering your own 2 × 2 matrix, you can put the uses listed there aside for the time being.
A mild cautionary note: Even in corporate learning where, as we have argued, the risk is low, there is risk. Generative AI is vulnerable to bias and errors, just as humans are. If you assume the outputs of a generative AI system are good to go and immediately distribute them to your entire workforce, there is plenty of risk. Your ability to strike the right balance between speed and quality will be tested.
So take the initial output as a first iteration. Improve on it with a more detailed prompt or two. And then tweak that output yourself, adding the real-world knowledge, nuance, even artistry and humor that, for a little while longer, only a human has.
TAKEAWAYS
Generative AI is able to produce first drafts and generate ideas virtually instantaneously, but it can also struggle with accuracy and ethical problems. How should companies navigate the risks in their pursuit of its rewards?
✓ In picking use cases, companies need to balance risk (How likely and how damaging is the possibility of untruths and inaccuracies being generated and disseminated?) and demand (What is the real and sustainable need for this kind of output, beyond the current buzz?).
✓ A 2 × 2 matrix that plots risk and demand can help companies choose the best generative AI projects and improve their chances of success.
✓ Companies should run experiments that fit into the high-demand/low-risk box of the matrix. The other three quadrants aren’t places where companies should rush to find uses for generative AI tools.
1. Ethan Mollick, “ChatGPT Is a Tipping Point for AI,” hbr.org, December 14, 2022, https://
Adapted from content posted on hbr.org, March 29, 2023 (product #H07J5S).
by David De Cremer, Nicola Morini Bianzino, and Ben Falk
The “creator economy” is currently valued at around $14 billion per year. Enabled by new digital channels, independent writers, podcasters, artists, and musicians can connect with audiences directly to earn their own incomes. Internet platforms such as Substack, Flipboard, and Steemit enable individuals not only to create content but also to become independent producers and brand managers of their work. While many kinds of work are being disrupted by new technologies, these platforms offer people new ways to make a living through human creativity.
In the face of technological change, creativity is often held up as a uniquely human quality, less vulnerable to the forces of technological disruption and critical for the future. Indeed, behavioral researchers even call the skill of creativity a human masterpiece.
Today, however, generative AI applications such as ChatGPT and Midjourney are threatening to upend this special status and significantly alter creative work, both independent and salaried. Jobs focused on delivering content—writing, creating images, coding, and other jobs that typically require an intensity of knowledge and information—now seem likely to be uniquely affected by generative AI.
What isn’t clear yet is what shape this kind of impact will take. We propose three possible—but, importantly, not mutually exclusive—scenarios for how this development might unfold. In doing so, we highlight risks and opportunities and conclude by offering recommendations for what companies should do today to prepare for this brave new world.
Today, most businesses recognize the importance of adopting AI to promote the efficiency and performance of their human workforce. For example, AI is being used to augment health-care professionals’ job performance in high-stakes work, advising physicians during surgery and used as a tool in cancer screenings. It’s also being used in customer service, a lower-stakes context. And robotics is used to make warehouses run with greater speed and reliability, as well as reducing costs.
With the arrival of generative AI, we’re seeing experiments with augmentation in more creative work. In 2021, GitHub introduced GitHub Copilot, an AI “pair programmer” that aids human coders.1 More recently, designers, filmmakers, and advertising execs have started using image generators such as DALL-E 2. These tools don’t require users to be very tech-savvy. In fact, most of these applications are so easy to use that even children with elementary-level verbal skills can use them to create content right now. Pretty much everyone can make use of them.
This scenario isn’t (necessarily) a threat to people who do creative work. Rather than putting many creators out of work, AI will support humans in the work they already perform, simply allowing them to do it with greater speed and efficiency. In this scenario, productivity would rise, as reliance on generative AI tools that use natural language reduces the time and effort required to come up with new ideas or pieces of text. Of course, humans will still have to devote time to correcting and editing the newly generated information, but overall, creative projects should be able to move forward more quickly (see chapter 5, “How Generative AI Can Augment Human Creativity”).
We can already glimpse what such a future holds: With reduced barriers to entry, we can expect many more people to engage in creative work. GitHub Copilot doesn’t replace the human coder, but it does make coding easier for novices, as they can rely on the knowledge and vast reams of data embedded within the model rather than having to learn everything from scratch. If more people learn “prompt engineering”—the skill of asking the machine the right questions—AI will be able to produce highly relevant and meaningful content that humans will need to edit only lightly before they can put it to use. This higher level of efficiency can be facilitated by having people speak instructions to a computer via advanced voice-to-text algorithms, which will then be interpreted and executed by an AI like ChatGPT.
The ability to quickly and easily retrieve, contextualize, and interpret knowledge may be the most powerful business application of large language models. A natural language interface combined with a powerful AI algorithm will help humans come up with more ideas and solutions more quickly, which they can then experiment with to produce more and better creative output. Overall, this scenario paints a world of faster innovation, in which machine-augmented human creativity mainly enables rapid iteration.
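A minimal sketch of that retrieval step, using naive keyword overlap in place of a real search index or embedding model (the documents, names, and scoring rule are our own toy assumptions, purely for illustration):

```python
def overlap_score(query: str, doc: str) -> int:
    """Count query-term matches -- a toy stand-in for the retrieval step
    that grounds an LLM's answer in a company's own documents."""
    terms = set(query.lower().split())
    return sum(1 for word in doc.lower().split() if word in terms)

# Hypothetical internal documents, used only for illustration.
docs = {
    "expense policy": "how to file an expense report and get reimbursed",
    "onboarding guide": "new hire onboarding checklist and first week schedule",
}
query = "file an expense report"
best = max(docs, key=lambda name: overlap_score(query, docs[name]))
print(best)
```

In practice the scoring function would be a vector search or a vendor retrieval API, and the top-ranked document would be passed to the model as context; the shape of the pipeline, however, is this simple.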
A second possible scenario is that unfair algorithmic competition and inadequate governance leads to the crowding out of authentic human creativity. Here, human writers, producers, and creators are drowned out by a tsunami of algorithmically generated content, with some talented creators even opting out of the market. If that were to happen, then an important question that we need to address is: How will we generate new ideas?
A nascent version of this scenario might already exist. For example, recent lawsuits against prominent generative AI platforms allege copyright infringement on a massive scale. What makes this issue even more fraught is that intellectual property laws have not caught up with the technological progress made in the field of AI research. It’s quite possible that governments will spend decades fighting over how to balance incentives for technical innovation while retaining incentives for authentic human creation—a route that would be a terrific loss for human creativity.
In this scenario, generative AI significantly changes the incentive structure for creators and raises risks for businesses and society. If cheaply made generative AI undercuts authentic human content, there’s a real risk that innovation will slow down over time as humans make less and less new art and content. Creators are already in intense competition for human attention spans, and this kind of competition—and pressure—will only rise further if there is unlimited content on demand. Extreme content abundance, far beyond what we’ve seen with any digital disruption to date, will inundate us with noise, and we’ll need to find new techniques and strategies to manage the deluge.
This scenario could also mean fundamental changes to what content creation looks like. If production costs fall close to nothing, that opens up the possibility of reaching specific—and often less included—audiences through extreme personalization and versioning. In fact, we expect the pressure to personalize to go up fast because generative AI carries such great potential to create content that is increasingly representative of the specific consumer. As a case in point, BuzzFeed announced it will personalize its content such as quizzes and tailor-made rom-com pitches with OpenAI’s tools.2
If the practice of enhanced personalized experiences is applied broadly, then we run the risk of losing the shared experience of watching the same film, reading the same book, and consuming the same news. In that case, it will be easier to create politically divisive viral content and significant volumes of mis/disinformation as the average quality of content declines alongside the share of authentic human content. Both would likely worsen filter bubble effects, where algorithmic bias skews or limits what an individual sees online.
Yet even in this relative dystopia, there remains a significant role for humans to make recommendations of existing content in this ecosystem. As in other very large content markets, like music streaming services, curation will become more valuable relative to creation as search costs rise. At the same time, however, high search costs will lock in existing artists at the expense of new ones, concentrating and bifurcating the market. This will result in a small handful of established artists dominating the market with a long tail of creators retaining minimal market share.
The third potential scenario that we could see develop is one where the “techlash” against giant tech companies regains speed, this time with a focus against algorithmically generated content. One plausible effect of being inundated with synthetic creative outputs is that people will begin to value authentic creativity over generated content and may be willing to pay a premium for it. While generative models demonstrate remarkable and sometimes emergent capabilities, they suffer from problems with accuracy, frequently producing text that sounds legitimate but is riddled with factual errors and erroneous logic. For obvious reasons, humans might demand greater accuracy from their content providers and may therefore rely more on trusted human sources than on machine-generated information.
In this scenario, humans maintain a competitive advantage against algorithmic competition. The uniqueness of human creativity, including awareness of social and cultural context both across borders and through time, will become important leverage. Culture changes much more quickly than generative algorithms can be trained, so humans maintain a dynamism that algorithms cannot compete against. In fact, it is likely that humans will retain the ability to make significant leaps of creativity, even if algorithmic capabilities improve incrementally.
In this scenario’s development, it follows that political leadership will have to strengthen governance to deal with the potential downside risks. For instance, content-moderation needs are likely to explode as information platforms are overwhelmed with false or misleading content, a flood that must be countered with human intervention and carefully designed governance frameworks.
Creativity has always been a critical prerequisite for any company’s innovation process and hence competitiveness. Not too long ago, the business of creativity was a uniquely human endeavor. However, as we’ve illustrated, the arrival of generative AI is about to change all this. To be prepared, we need to understand the accompanying threats and challenges. Once we understand what is to change and how, we can prepare for a future where the creativity business will be a function of human–machine collaborations. Below, we provide three recommendations that workers should consider as they adopt generative AI to create business value and profit in today’s creative industries.
Generative AI could be the biggest change in the cost structure of information production since the creation of the printing press in 1439. The centuries that followed featured rapid innovation, sociopolitical volatility, and economic disruption across a swath of industries as the cost of acquiring knowledge and information fell precipitously. We are in the very early stages of the generative AI revolution. We expect the near future therefore to be more volatile than the recent past.
Codifying, digitizing, and structuring the knowledge you create will be a critical value driver in the decades to come. Generative AI and large language models enable knowledge and skills to transmit more easily across teams and business units, accelerating learning and innovation.
As AI becomes a partner in intellectual endeavors, it will increasingly augment the effectiveness and creativity of our human intelligence. Knowledge workers therefore will need to learn how to best prompt the machine to perform their work. Get started today, experimenting with generative AI tools to develop skills in prompt engineering, a prerequisite skill for creative workers in the decade to come.
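The iterative prompting workflow that skill implies can be sketched as follows. The `generate` function below is a hypothetical stand-in for a real LLM call, not any vendor's actual API; the refinement loop is the part worth noting:

```python
# Hypothetical stand-in for a real LLM call; swap in your provider's SDK.
def generate(prompt: str) -> str:
    return f"[model output for: {prompt}]"

def refine(draft: str, instructions: list) -> str:
    """Iterative prompting: start from a first draft, then fold one
    more-detailed instruction into the request per pass."""
    text = draft
    for step in instructions:
        text = generate(f"{step}\n---\n{text}")
    return text

result = refine(
    "Quarterly update on our onboarding course.",
    ["Rewrite in a friendlier tone.", "Summarize in five bullet points."],
)
```

The habit to build is exactly this loop: treat the first output as a draft, tighten the prompt, and keep the human edit as the final pass.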
• • •
With generative AI, a major disruptor of our creative work has emerged. Businesses and the world at large will be impatient to apply these emerging technologies to boost productivity and content generation. Be prepared to invest significant time and effort to master the art of creativity in a world dominated by generative AI.
At the same time, we also need to seriously consider what these new technologies mean for being a creative human today and how much importance we wish to assign to the role of human authenticity in art and content. In other words, with generative AI at the forefront of our work existence, what will our relationship with creativity be? It was Einstein who said that creativity is intelligence having fun. Creative work is thus also something that brings meaning and emotion to the lives of humans.
From that perspective, businesses and society will be responsible for deciding how much of the creative work will ultimately be done by AI and how much by humans. Finding the balance here will be an important challenge as we move ahead with integrating generative AI into our daily work.
TAKEAWAYS
Through the automation and customization of content creation, generative AI has the potential to transform the creative process. Applications that use generative AI, including ChatGPT and Midjourney, are proliferating and pose a threat to all types of creative work.
✓ There are three scenarios that could occur because of generative AI’s impact on creativity: an explosion of AI-assisted innovation, the monopolization of creativity by machines, or a premium placed on human-produced content.
✓ Individuals and businesses should be ready for disruption, invest in knowledge ontologies, and become comfortable speaking with AI.
✓ When incorporating generative AI into creative work, we must consider what we want our continuing relationship with human creativity to be.
1. Nat Friedman, “Introducing GitHub Copilot: Your AI Pair Programmer,” GitHub blog, June 29, 2021, https://
2. James Vincent, “BuzzFeed Says It Will Use AI Tools from OpenAI to Personalize Its Content,” The Verge, January 21, 2023, https://
Adapted from content posted on hbr.org, April 13, 2023 (product #H07LIA).
by Tojin T. Eapen, Daniel J. Finkenstadt, Josh Folk, and Lokesh Venkataswamy
There is tremendous apprehension about the potential of generative AI to replace people in many jobs. But one of the biggest opportunities generative AI offers to businesses and governments is to augment human creativity and overcome the challenges of democratizing innovation.
The term democratizing innovation was coined by MIT’s Eric von Hippel, who, since the mid-1970s, has been researching and writing about the potential for users of products and services to develop what they need themselves rather than simply relying on companies to do so. In the past two decades or so, the notion of deeply involving users in the innovation process has taken off, and today companies use crowdsourcing and innovation contests to generate a multitude of new ideas. However, many enterprises struggle to capitalize on these contributions because of four challenges.
First, efforts to democratize innovation may result in evaluation overload. Crowdsourcing, for instance, may produce a flood of ideas, many of which end up being dumped or disregarded because companies have no efficient way to evaluate them or merge incomplete or minor ideas that could prove potent in combination.
Second, companies may fall prey to the curse of expertise. Domain experts who are best at generating and identifying feasible ideas often struggle with generating or even accepting novel ideas.
Third, people who lack domain expertise may identify novel ideas but may be unable to provide the details that would make the ideas feasible. They can’t translate messy ideas into coherent designs.
And finally, companies have trouble seeing the forest for the trees. Organizations focus on synthesizing a host of customer requirements but struggle to produce a comprehensive solution that will appeal to the community at large.
Our research and our experience working with companies, academic institutions, governments, and militaries on hundreds of innovation efforts—some with and some without the use of generative AI—have demonstrated that this technology can help organizations overcome these challenges. It can augment the creativity of employees and customers and help them generate and identify novel ideas—and improve the quality of raw ideas. We have observed the following five ways.
Generative AI can support divergent thinking by making associations among remote concepts and producing ideas drawn from them. Here’s an example of how we used Midjourney, a text-to-image algorithm that can detect analogical resemblances between images, to generate novel product designs based on textual prompts from a human. (We used Midjourney, ChatGPT, and Stable Diffusion for the examples in this article, but they are just a few of a host of generative AI tools that are now available.) We asked Midjourney to create an image that combined an elephant and a butterfly, and it produced the chimera we dubbed “phantafly.”
We then used the detailed rendering from Midjourney to inspire prompts in Stable Diffusion, another popular text-to-image model. Stable Diffusion generated a range of ideas for different product categories, including chairs and artisanal chocolate candies (see figures 5-1 and 5-2).
Rapidly and inexpensively producing a plethora of designs in this way allows a company to evaluate a wide range of product concepts quickly. For example, a clothing company that uses generative AI to create new designs for T-shirts could stay on top of trends and offer a constantly changing selection of products to customers.
FIGURE 5-1
Phantafly-inspired chair concepts by Stable Diffusion
FIGURE 5-2
Phantafly-inspired artisanal chocolate concepts by Stable Diffusion
Consider another example of how this technology can connect ideas to create concepts that an individual or a team might never have come up with themselves. We used ChatGPT, a type of generative AI known as a large language model, to guide the production of ideas. We asked it to generate ideas through a process of trisociation—connecting three distinct entities (an extension of the bisociation creativity technique). Our team gave ChatGPT the following prompt: “You will play the role of an ideator. You will randomly generate 10 common nouns. You will then randomly select any two of the 10 nouns. You will then ask me for a third noun. You will generate a business idea by combining or associating the two nouns you identified and the noun I identified.”
ChatGPT generated the nouns food and technology. When prompted, we provided the additional noun car. ChatGPT produced the following business idea in short order: “A smart food-delivery service that uses self-driving cars to transport meals to customers. The technology aspect could involve using AI to optimize delivery routes, track food temperature in real time, and provide customers with real-time updates on the status of their orders. The service could target busy professionals and families who want convenient and healthy meal options without sacrificing taste and quality.”
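The trisociation exercise above boils down to a short prompt-construction step. Below is a minimal Python sketch of that step; the noun list and the `trisociation_prompt` helper are our own illustrations, and the resulting string would be handed to whatever LLM client an organization uses (the authors used ChatGPT).

```python
import random

def trisociation_prompt(noun_a: str, noun_b: str, noun_c: str) -> str:
    """Build the trisociation instruction described in the text:
    combine two randomly chosen nouns with one human-supplied noun."""
    return (
        "You will play the role of an ideator. "
        f"Generate a business idea by combining or associating "
        f"'{noun_a}' and '{noun_b}' with '{noun_c}'."
    )

# In the article, the model itself generated the first two nouns;
# here we draw them from a stand-in list purely for illustration.
common_nouns = ["food", "technology", "airline", "chair", "book"]
a, b = random.sample(common_nouns, 2)

# "car" plays the role of the third noun supplied by the human.
prompt = trisociation_prompt(a, b, "car")
```

The prompt string would then be sent to a chat model, which plays the ideator role and returns a business concept such as the smart food-delivery service described above.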
In a separate round, ChatGPT produced the nouns airline and chair. When prompted, we provided university, and ChatGPT came up with a business concept that provides a convenient, cost-effective way for students and academics to travel to conferences and workshops around the world along with access to a library of educational books during the flight. It proposed that the company be called Fly and Study or Edu-Fly.
During the early stages of new-product development, atypical designs created by generative AI can inspire designers to think beyond their preconceptions of what is possible or desirable in a product in terms of both form and function. This approach can lead to solutions that humans might never have imagined using a traditional approach, where the functions are determined first and the form is then designed to accommodate them. These inputs can help overcome biases such as design fixation (an overreliance on standard design forms), functional fixedness (a lack of ability to imagine a use beyond the traditional one), and the Einstellung effect, where individuals’ previous experiences impede them from considering new ways to solve problems.
Here’s an example of this process. We asked Stable Diffusion to generate generic designs of crab-inspired toys but provided it with no functional specifications. Then we imagined functional capabilities after seeing the designs. For instance, in the collection of crab-inspired toys shown in figure 5-3, the image in the top left could be developed into a wall-climbing toy; the image next to it could be a toy that launches a small ball across a room. The crab on a plate near the center could become a slow-feeder dish for pets.
This is not a completely novel way to come up with unusual products: Much of the architecture and ride functionality in theme parks such as Disney World has been driven by a desire to recreate scenes and characters from a story. But generative AI tools can help jump-start a company’s imaginative designs.
FIGURE 5-3
Crab-inspired toy concepts by Stable Diffusion
Generative AI tools can assist in other aspects of the front end of innovation, including by increasing the specificity of ideas and by evaluating ideas and sometimes combining them. Consider an innovation challenge where the goal is to identify ways to minimize food waste. ChatGPT assessed the pros and cons of three raw ideas: (1) packaging with dynamic expiration dates (labels that automatically change either the dates or colors based on the environmental conditions in the places where they are stored); (2) an app to help users donate food; and (3) a campaign to educate people on types of expiration dates and what they represent in terms of freshness and fitness for use. ChatGPT produced a balanced analysis of the pros and cons that mirrored what we might expect from an exchange between two interested persons discussing the merits of such ideas.
When ChatGPT evaluated the concept of dynamic expiration-date packaging, for instance, it determined that it would help consumers better understand the shelf life of products and encourage food manufacturers to produce smaller batches that would be replenished more frequently on grocery shelves. In addition, ChatGPT pointed out that dynamic expiration dates might require significant changes to the manufacturing and packaging process and, as a result, could increase costs for both manufacturers and consumers.
ChatGPT determined that the food-donation app could encourage people to use up their food before it goes bad and reduce food waste by giving unopened, edible food to those in need. It cautioned that the app could require a large user base to be effective and that the transportation and distribution of food from a wide variety of unregulated sources could pose safety concerns.
It stated that the pros of an education program for consumers were increasing consumer awareness of the meaning of different expiration labels and helping them make more-informed decisions about food purchases and waste. But ChatGPT warned that this education program could be overly complex because expiration dates are not standardized across all food products. And it cautioned that educating users on different types of expiration dates can be costly if the program is broad in scope, particularly if it involves widespread campaigns or educational materials.
Generative AI can go beyond simple pros and cons and help humans evaluate dimensions of creativity such as novelty, feasibility, specificity, impact, and workability. We asked ChatGPT to assess the same examples using these criteria.
Here is its assessment of the dynamic expiration-date packaging concept:
Here is ChatGPT’s assessment of the app for donating food close to expiration:
Finally, here is how ChatGPT evaluated the idea to educate users on different types of expiration dates:
Using ChatGPT’s assessments, it would be relatively easy to evaluate these three concepts, or quickly score or organize them based on the criteria that matter most.
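Once a model has returned numeric scores on criteria like these, triaging even a large idea pool is straightforward. The sketch below assumes hypothetical 1-to-10 scores and illustrative weights; none of the numbers come from the article.

```python
# The five creativity criteria named in the text.
criteria = ["novelty", "feasibility", "specificity", "impact", "workability"]

# Hypothetical scores an LLM might assign to each concept (one per criterion).
scores = {
    "dynamic expiration-date packaging":  [8, 5, 7, 8, 6],
    "food-donation app":                  [6, 7, 6, 7, 7],
    "expiration-date education campaign": [4, 8, 5, 5, 8],
}

# Weight feasibility and impact more heavily (an illustrative choice).
weights = [1.0, 1.5, 1.0, 1.5, 1.0]

def weighted_score(vals, weights):
    """Weighted average of a concept's criterion scores."""
    return sum(v * w for v, w in zip(vals, weights)) / sum(weights)

# Rank concepts by the criteria that matter most to the organization.
ranked = sorted(scores, key=lambda k: weighted_score(scores[k], weights),
                reverse=True)
```

Changing the weights lets a team re-rank the same pool instantly, which is exactly the "score or organize them based on the criteria that matter most" step described above.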
Generative AI tools can solve an important challenge faced in idea contests: combining or merging a large number of ideas to produce much stronger ones. For example, an organization that is trying to win a contract or a grant to develop a food-waste-reduction program can use the tools to combine the three concepts discussed earlier. We did this in two steps. First, we asked ChatGPT to flesh out each of the concepts:
Next, we asked ChatGPT to merge all three ideas into one food-waste program. In short order it produced the following description: “The comprehensive food-waste-reduction solution involves creating smart packaging with dynamic expiration dates, establishing a program for donating surplus food, and educating the public on the importance of understanding food expiration dates. The smart packaging helps consumers make informed decisions about food freshness, the donation program ensures surplus food is put to good use, and education raises awareness about the impact of food waste. This solution has the potential to significantly reduce food waste and create a more sustainable future by empowering consumers, working with industry and government, and making better use of surplus food.”
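The two-step flesh-out-then-merge workflow described above is a simple prompt chain. In this sketch, `complete` is a placeholder for any LLM call; the stub used here merely echoes a tag of the prompt so the chain's structure can be exercised without an API.

```python
from typing import Callable, List

def merge_ideas(ideas: List[str], complete: Callable[[str], str]) -> str:
    """Two-step chain from the text: first flesh out each raw idea,
    then ask the model to merge the expanded versions into one program."""
    expanded = [complete(f"Flesh out this concept in detail: {idea}")
                for idea in ideas]
    merge_prompt = (
        "Merge the following concepts into one comprehensive "
        "food-waste-reduction program:\n"
        + "\n".join(f"- {e}" for e in expanded)
    )
    return complete(merge_prompt)

# A stub standing in for a real LLM client, so the chain runs offline.
echo = lambda prompt: f"[model output for: {prompt[:40]}...]"

result = merge_ideas(
    ["dynamic expiration dates", "food-donation app",
     "expiration-date education"],
    echo,
)
```

Swapping `echo` for a real chat-completion call would reproduce the workflow the authors describe, with each step's output feeding the next prompt.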
When developing new product ideas or designs, generative AI can facilitate collaborations between a company’s designers and users of a prospective product and among users themselves. In other words, the technology makes co-creation of new offerings much easier and less expensive. For example, a business can give users access to tools to generate designs and then create a personalized version of the product based on the customer’s preferences. Alternatively, users’ designs could be posted on a crowdsourcing platform where they are evaluated by the community. The top designs could then be further developed by additional collaborators.
To illustrate the potential, we show how a flying car—something people have been trying to develop for more than 100 years without much success—might be designed. We gave Stable Diffusion this prompt: “Design a product that can fly but also drive on the road, a flying automobile.” Stable Diffusion generated several designs, and we selected what we considered to be the most promising one: the vehicle in the lower right corner of figure 5-4.
Then we asked Stable Diffusion to take that design and reimagine the concept so that the car “resembles a robot eagle.” Figure 5-5 shows the variations that the generative AI program quickly produced—from the top left design that looks most like a robot eagle to the more feasible concept of a flying automobile in the lower right corner.
FIGURE 5-4
Stable Diffusion’s concepts of a flying automobile
FIGURE 5-5
Stable Diffusion’s concepts of a flying automobile that resembles a robot eagle
A second example illustrates how designers can use such tools to collaborate on thematic variations of a structural design. They began with a flying-automobile design generated by AI and asked the tool to produce versions that resembled a dragonfly, a tiger, a tortoise, and an eagle (see figure 5-6).
An alternative approach is for human collaborators to use a tool like ChatGPT to develop details of the product and then use one like Stable Diffusion to obtain visual designs based on a series of prompts that build on one another. We gave ChatGPT a similar prompt to what we had given to Stable Diffusion: “Describe a product that can fly but also drive on the road, a flying automobile.”
FIGURE 5-6
AI-generated designs of a flying car that resemble a dragonfly, a tiger, a tortoise, and an eagle
ChatGPT provided this description: “The flying automobile is a sleek and futuristic vehicle that is built for the ultimate adventure. It has the appearance of a stylish sports car with smooth curves and polished exterior but with hidden rotors that allow it to take flight.” When we gave that description to Stable Diffusion, it provided the image shown in figure 5-7.
FIGURE 5-7
Flying automobile design using Stable Diffusion to generate a design from a ChatGPT description
Next we asked ChatGPT to reimagine the description to include the information that the product must resemble a dragonfly and have illumination markers for flying at night. It came back with the following: “With its slender body, extended wings, and hidden rotors, the vehicle is reminiscent of a dragonfly come to life. The illuminated markers located along the wings and body create a stunning visual effect, helping to make the vehicle visible in the darkness.” Stable Diffusion translated that description into various versions that maintained the feasible design and added elements of illumination based on the pattern of a dragonfly’s wings. The images in figure 5-8 are examples.
FIGURE 5-8
Variations on the design that incorporate dragonfly details and illumination
• • •
Humans have boundless creativity. However, the challenge of communicating their concepts in written or visual form restricts vast numbers of people from contributing new ideas. Generative AI can remove this obstacle. As with any truly innovative capability, there will undoubtedly be resistance to it. Long-standing innovation processes will have to change. People with vested interests in the old way of doing things—especially those worried about being rendered obsolete—will resist. But the advantages—the opportunities to dramatically increase the number and novelty of ideas from both inside and outside the organization—will make the journey worthwhile. Generative AI’s greatest potential is not to replace humans; it is to assist humans in their individual and collective efforts to create hitherto unimaginable solutions. It can truly democratize innovation.
TAKEAWAYS
✓ Generative AI has the potential to augment human creativity. It enables designers to investigate concepts from several perspectives, think divergently, see beyond their own assumptions, and use data-driven insights to question those assumptions.
✓ AI can help solve creativity-related problems like assessment overload, expertise bias, insufficient details, and trouble understanding the bigger picture.
✓ Generative AI can support the examination and improvement of ideas by evaluating fresh concepts and combinations of already-existing undeveloped concepts.
✓ These technologies encourage user participation in the codevelopment of new products.
Adapted from an article in Harvard Business Review, July–August 2023 (product #R2304C).
by Prabhakant Sinha, Arun Shastri, and Sally E. Lorimer
Early in 2023, Microsoft fired a powerful salvo by launching Viva Sales, an application with embedded generative AI technology designed to help salespeople and sales managers draft tailored customer emails, get insights about customers and prospects, and generate recommendations and reminders. A few weeks later, Salesforce (the company) followed by launching Einstein GPT.
Sales, with its unstructured, highly variable, people-driven approach, has lagged behind functions such as finance, logistics, and marketing when it comes to utilizing digital technologies. But now, sales is primed to quickly become a leading adopter of generative AI. AI-powered systems are on the way to becoming every salesperson’s (and every sales manager’s) indispensable digital assistant.
Sales is well suited to the capabilities of generative AI models. Selling is interaction- and transaction-intensive, producing large volumes of data, including text from email chains, audio of phone conversations, and video of personal interactions. These are exactly the types of unstructured data the models are designed to work with. The creative and organic nature of selling creates immense opportunities for generative AI to interpret, learn, link, and customize.
But there are hurdles and challenges to overcome if generative AI is to realize its potential. It must be nonintrusively embedded into sales processes and operations so that sales teams can naturally integrate the capabilities into their workflow. Generative AI sometimes draws wrong, biased, or inconsistent conclusions. Although the publicly accessible models are valuable (hundreds of millions of users like us have already used ChatGPT to query the knowledge base on practically every topic), the true power for sales teams comes when models are customized and fine-tuned on company-specific data and contexts. This can be expensive and requires scarce expertise, including people with significant knowledge of AI and sales. So how can sales organizations harvest the value without wasting energy on heading down unproductive pathways?
Before addressing the how, consider what generative AI can do for sales organizations.
Almost every sales organization we touch is cursed with the gradual increase of administrative work over time. As selling complexity grows, so does the need for documentation, approvals, and compliance reporting. Unwittingly, the increasing use of sales technology is also a large factor. New technologies often lead to more training, more data entry, and more reports to peruse. Generative AI can reverse administrative creep; for example, by helping salespeople write emails, respond to proposal requests, organize notes, and automatically update CRM data.
The use of AI in sales has been progressing of late. We have helped many companies deploy AI-powered systems that recommend personalized content and product offers, along with the best channel for salespeople to use to connect with customers. Recommendations are based on data about the preferences and behaviors of the customer and similar customers, as well as past interactions with the customer. Salespeople accept or reject the recommendations and can rate their quality to improve the algorithms.
By layering on generative AI, the models can produce better recommendations. One example would be considering customer sentiments gleaned from the nuances of language and subtle signals of customer interest or distrust—in emails, conversations with salespeople, posts on social media sites, and more. Further, the salesperson can collaborate with the system to improve recommendations in real time. For example, after receiving a suggestion to approach a customer with a new offering, the salesperson can dig deeper—both vertically into the customer’s own needs and horizontally to find other customers who might benefit from the same offering. An interactive, conversational user interface makes the application easy to use. In a truly collaborative seller–buyer environment, even the buyer can be part of the dialog.
Sales managers spend a lot of time studying reports and analytics on sales performance. Recently, most sales reports have progressed from passive, backward-looking documents to more interactive diagnostics tools with drill-down capabilities. With generative AI, reporting systems can become even more powerful and forward-looking. Managers can pose questions to get insights for helping salespeople improve and for delivering more pointed and more motivational coaching feedback. Sales planning tasks that took weeks can be performed in an hour as managers dialogue with the system to discover opportunities, formulate key account strategies, and determine how to allocate effort to geographies, customers, products, and activities.
Generative AI is relatively new and evolving rapidly. There is a shortage of talent for defining its role, training and fine-tuning models, and developing and implementing applications. Organizations must find pathways that guard against falsehoods, realize value quickly, and deliver results while keeping costs under control.
ChatGPT and its competitors do sometimes give inaccurate answers or draw the wrong inferences. You ask the same question twice and you get different answers. Users must know when and how to use such technologies. They must start with high but realistic expectations. There is an art to asking questions and providing successive prompts to improve the answer. Sales organizations must learn this through training, apprenticeship, and best-practice sharing.
The risk is lower when these models are fine-tuned on knowledge from the company’s context. Through added data, training, and feedback, accuracy and consistency improve (just like with people!). AI-generated answers in risky contexts must be reviewed by a person. Fortunately, human review is a natural part of salespeople’s and sales managers’ workflow.
As the power of this disruptive technology grows exponentially, it’s possible to start realizing value in weeks, not months. One strategy for quick results is to integrate capabilities into existing sales systems. For example, generative AI can improve the tools salespeople use to write emails or develop sales presentations and proposals. It can also boost the quality of AI-generated suggestions by incorporating insights about customer sentiments. Such enhancements can happen in the background, so users benefit without needing to relearn application features. When it comes to speed of implementation, “buy” trumps “build.” Although building a custom AI-powered system offers greater flexibility, doing so is time-consuming and resource-intensive. Buying an existing application reduces the need for specialized in-house talent and makes it easier to keep up with fast-changing technology.
It often makes sense to outsource capabilities while developing a small core of internal AI experts who support sales as well as other functions. The odds of success are greater when efforts to bring AI to sales are led by a “boundary spanner”—an individual who understands and is respected by technical experts as well as by sales force members. By speaking both languages, a boundary spanner can help judiciously tailor solutions so they are usable and useful for sales but also implementable and sustainable over time. Further, an agile, iterative approach to implementation keeps efforts on the path to value while encouraging continuous improvement. Key steps include rapid prototyping, testing, and iteration based on feedback from an early-experience team—a group of lead users who provide insights about system usability, value, and implementation plans.
We expect generative AI to power digital assistants for nearly every salesperson and sales manager. These tools are already helping copywriters draft content and computer programmers write code, boosting their productivity by 50% or more. Generative AI can do the same for salespeople.
AI is already making customer self-service more powerful, and inside sales more potent. Consumers are increasingly using digital technology to research products and services on their own.
E-commerce has taken off in the B2B world too. Even in complex sales, digital plays an increasing role, taking on tasks such as lead generation and prioritization, product information sharing and configuring, and order placement. Inexorably, digital and inside sales continue to take over many tasks that field salespeople used to do, especially for familiar purchases.
However, new and complex offerings still require salespeople who can identify perceived and latent needs, tailor solutions, and navigate complex buying organizations. Yes, AI will take tasks away from salespeople and focus their role even more on complex situations. At the same time, the companies that sell AI technologies will create large sales forces to capture the looming massive and complex opportunities.
TAKEAWAYS
✓ Generative AI can transform sales by freeing up time for sales representatives and managers to focus on more value-adding activities. When generative AI is properly integrated into sales processes, it is certain to increase productivity.
✓ These tools can assist in developing important account strategies, reversing administrative creep, providing personalized content and product offers, responding to proposal request emails, working with customers, and more.
✓ On the pathway to integrating these technologies, salespeople and sales managers will need strategies for dealing with inconsistency and inaccuracy, realizing value rapidly, and delivering results while controlling costs.
✓ AI is quickly becoming a necessary digital assistant, but for complex products, knowledgeable salespeople are and will continue to be needed.
Adapted from content posted on hbr.org, March 31, 2023 (product #H07JGX).
by Gil Appel, Juliana Neelbauer, and David A. Schweidel
Generative AI can seem like magic. Image generators such as Stable Diffusion, Midjourney, or DALL-E 2 can produce remarkable visuals in styles from aged photographs and watercolors to pencil drawings and pointillism. The resulting products can be fascinating—both quality and speed of creation are elevated compared with average human performance. The Museum of Modern Art in New York hosted an installation that was AI-generated from the museum’s own collection, and the Mauritshuis in The Hague hung an AI variant of Vermeer’s Girl with a Pearl Earring while the original was away on loan.
The capabilities of text generators are perhaps even more striking as they write essays, poems, and summaries and are proving adept mimics of style and form (though they can take creative license with facts).
While it may seem like these new AI tools can conjure new material from the ether, that’s not quite the case. Generative AI platforms are trained on data lakes and question snippets—billions of parameters that are constructed by software processing huge archives of images and text. The AI platforms recover patterns and relationships, which they then use to create rules and then make judgments and predictions when responding to a prompt.
This process comes with legal risks, including intellectual property (IP) infringement. In many cases, it also poses legal questions that are still being resolved. For example, does copyright, patent, or trademark infringement apply to AI creations? Is it clear who owns the content that generative AI platforms create for you or your customers? Before businesses can embrace the benefits of generative AI, they need to understand the risks—and how to protect themselves.
Though generative AI may be new to the market, existing laws have significant implications for its use. Courts are sorting out how the laws on the books should be applied. There are infringement and right-of-use issues, uncertainty about ownership of AI-generated works, and questions about unlicensed content in training data and whether users should be able to prompt these tools with direct reference to other creators’ copyrighted and trademarked works by name without their permission.
These claims are already being litigated. In a case filed in late 2022, Andersen v. Stability AI et al., three artists formed a class to sue multiple generative AI platforms on the grounds that the AI was using their original works without license to train their AI in their styles. The platforms were thus allowing users to generate works that might be insufficiently transformative from the artists’ existing protected works and, as a result, would be unauthorized derivative works. If a court finds that the AI’s works are unauthorized and derivative, substantial infringement penalties can apply.
Similar cases filed in 2023 bring claims that companies trained AI tools using data lakes with thousands—or even many millions—of unlicensed works. Getty, an image licensing service, filed a lawsuit against the creators of Stable Diffusion alleging the improper use of its photos, violating both copyright and trademark rights it has in its watermarked photograph collection.
In each of these cases, the legal system is being asked to clarify the bounds of what is a “derivative work” under intellectual property laws—and depending on the jurisdiction, different federal circuit courts may respond with different interpretations. The outcome of these cases is expected to hinge on the interpretation of the fair use doctrine, which allows copyrighted work to be used without the owner’s permission “for purposes such as criticism (including satire), comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research,” and for a transformative use of the copyrighted material in a manner for which it was not intended.
This isn’t the first time technology and copyright law have crashed into each other. Google successfully defended itself against a lawsuit by arguing that transformative use allowed for the scraping of text from books to create its search engine, and for the time being, this decision remains precedential.
But there are other, nontechnological cases that could shape how the products of generative AI are treated. A 2023 case before the U.S. Supreme Court against the Andy Warhol Foundation—brought by photographer Lynn Goldsmith, who had licensed an image of the late musician Prince—may refine U.S. copyright law on the issue of when a piece of art is sufficiently different from its source material to be unequivocally “transformative,” and whether a court can consider the meaning of the derivative work when it evaluates that transformation. A finding by the court that the Warhol piece is not fair use could mean trouble for AI-generated works.
All this uncertainty presents a slew of challenges for companies that use generative AI. There are risks regarding infringement—direct or unintentional—in contracts that are silent on generative AI usage by their vendors and customers. If a business user is aware that training data might include unlicensed works or that an AI can generate unauthorized derivative works not covered by fair use, a business could be on the hook for willful infringement, which can include damages up to $150,000 for each instance of knowing use. There’s also the risk of accidentally sharing confidential trade secrets or business information by inputting data into generative AI tools.
This new paradigm means that companies need to take new steps to protect themselves for both the short and long term.
AI developers, for one, should ensure that they comply with the law in acquiring the data used to train their models. This should involve compensating the individuals who own the IP that developers seek to add to their training data, whether by licensing it or by sharing in revenue generated by the AI tool. Customers of AI tools should ask providers whether their models were trained with any protected content, review the terms of service and privacy policies, and avoid generative AI tools that cannot confirm that their training data is properly licensed from content creators or subject to open-source licenses with which the AI companies comply.
In the long run, AI developers will need to take initiative about the ways they source their data—and investors need to know the origin of the data. Stable Diffusion, Midjourney, and others have created their models based on the LAION-5B dataset, which contains almost 6 billion tagged images compiled from scraping the web indiscriminately and is known to include a substantial number of copyrighted creations.
Stability AI, which developed Stable Diffusion, has announced that artists will be able to opt out of the next generation of the image generator. But this puts the onus on content creators to actively protect their IP, rather than requiring the AI developers to secure the IP to the work prior to using it—and even when artists opt out, that decision will be reflected only in the next iteration of the platform. Instead, companies should require the creator’s opt-in rather than opt-out.
Developers should also work on ways to maintain the provenance of AI-generated content, which would increase transparency about the works included in the training data. This would include recording the platform that was used to develop the content, details on the settings that were employed, tracking of the seed data’s metadata, and tags to facilitate AI reporting, including the generative seed and the specific prompt that was used to create the content. Such information would not only allow the image to be reproduced, so that its veracity can be verified easily, but would also speak to the user’s intent, thereby protecting business users that might need to overcome intellectual property infringement claims and demonstrate that the output was not the product of a willful intent to copy or steal.
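A provenance record of this kind can be sketched as a simple data structure. The field names and the platform name below are illustrative assumptions, not any established standard:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ProvenanceRecord:
    """Illustrative provenance metadata for one piece of AI-generated content."""
    platform: str               # tool used to develop the content
    settings: dict              # generation settings that were employed
    seed: int                   # generative seed, enabling reproduction
    prompt: str                 # the specific prompt used to create the content
    seed_data_tags: list = field(default_factory=list)  # tags to facilitate AI reporting

# Example record (all values are invented for illustration)
record = ProvenanceRecord(
    platform="HypotheticalImageGen",
    settings={"steps": 30, "guidance": 7.5},
    seed=123456789,
    prompt="street mural of a city skyline at dawn",
    seed_data_tags=["licensed-collection-A"],
)
```

Stored alongside each output (for instance, serialized via `asdict`), such a record would let the image be regenerated and audited later.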
Developing these audit trails would assure that companies are prepared if (or, more likely, when) customers start including demands for them in contracts as a form of insurance that the vendor’s works aren’t willfully, or unintentionally, derivative without authorization. Looking further into the future, insurance companies may require these reports in order to extend traditional insurance coverages to business users whose assets include AI-generated works. Breaking down the contributions of individual artists who were included in the training data to produce an image would further support efforts to appropriately compensate contributors, and even embed the copyright of the original artist in the new creation.
Both individual content creators and brands that create content should take steps to examine risks to their intellectual property portfolios and protect those portfolios. This involves proactively looking for their work in compiled datasets or large-scale data lakes, including visual elements such as logos and artwork as well as textual elements such as image tags. Obviously, this could not be done manually across terabytes or petabytes of content data, but existing search tools should allow the cost-effective automation of this task. New tools even promise to obfuscate creators’ works so that they cannot be ingested into these algorithms.
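At its simplest, this kind of automated scan is a text search over a dataset's metadata. A toy sketch, in which the dataset records and the creator's name are invented:

```python
# Toy dataset metadata: each record has a caption and tags (all invented).
records = [
    {"url": "img001.jpg", "caption": "oil painting of a harbor", "tags": ["harbor", "oil"]},
    {"url": "img002.jpg", "caption": "poster in the style of Jane Doe", "tags": ["poster"]},
    {"url": "img003.jpg", "caption": "logo study", "tags": ["JaneDoe", "logo"]},
]

def find_mentions(records, creator_terms):
    """Return URLs of records whose caption or tags contain any watched term."""
    terms = [t.lower() for t in creator_terms]
    hits = []
    for rec in records:
        text = rec["caption"].lower() + " " + " ".join(rec["tags"]).lower()
        if any(term in text for term in terms):
            hits.append(rec["url"])
    return hits

print(find_mentions(records, ["jane doe", "janedoe"]))  # → ['img002.jpg', 'img003.jpg']
```

A real scan would run this kind of matching over a dataset's published metadata files rather than an in-memory list, but the principle is the same.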
Content creators should actively monitor digital and social channels for the appearance of works that may be derived from their own. For brands with valuable trademarks to protect, it’s not simply a matter of looking for specific elements such as the Nike Swoosh or Tiffany Blue. Rather, there may be a need for trademark and trade dress (the general appearance of a product, including both its design and its packaging) monitoring to evolve in order to examine the style of derivative works, which may have arisen from being trained on a specific set of a brand’s images. Even though critical elements such as a logo or specific color may not be present in an AI-generated image, other stylistic elements may suggest that salient elements of a brand’s content were used to produce a derivative work. Such similarities may suggest the intent to appropriate the average consumer’s goodwill for the brand by using recognizable visual or auditory elements. Mimicry may be seen as the sincerest form of flattery, but it can also suggest the purposeful misuse of a brand.
The good news regarding trademark infringement for business owners is that trademark attorneys have well-established protocols for how to notify and enforce trademark rights against an infringer, such as by sending a strongly worded cease-and-desist notice or licensing demand letter, or moving directly to filing a trademark infringement claim, regardless of whether an AI platform or a human generated the unauthorized branding.
Businesses should evaluate their transaction terms to write protections into contracts. As a starting point, they should demand terms of service from generative AI platforms that confirm proper licensure of the training data that feeds their AI. They should also demand broad indemnification for potential intellectual property infringement caused by the AI company’s failure to properly license its data inputs, or by the AI’s failure to self-report outputs that flag potential infringement.
At a minimum, if either party is using generative AI, businesses should add disclosures in their vendor and customer agreements (for custom services and products delivery) to ensure that intellectual property rights are understood and protected on both sides of the table. They should also disclose how each party will support registration of authorship and ownership of those works. Vendor and customer contracts can include AI-related language added to confidentiality provisions to bar receiving parties from inputting confidential information of the information-disclosing parties into text prompts of AI tools.
To reduce unintended risks of use, some leading firms have created generative AI checklists for contract modifications for their clients that assess each clause for AI implications. Organizations that use generative AI, or work with vendors that do, should keep their legal counsel abreast of the scope and nature of that use as the law will continue to evolve rapidly.
• • •
Going forward, content creators that have a sufficient library of their own intellectual property on which to draw may consider building their own datasets to train and mature AI platforms. The resulting generative AI models need not be trained from scratch but can build on open-source generative AI that has used lawfully sourced content. This would enable content creators to produce content in the same style as their own work, with an audit trail to their own data lake, or to license the use of such tools to interested parties with cleared title in both the AI’s training data and its outputs. In this same spirit, content creators who have developed an online following may consider co-creation with followers as another means of sourcing training data, recognizing that these co-creators should be asked, via terms of service and privacy policies that are updated as the law changes, for permission to make use of their content.
Generative AI will change the nature of content creation, enabling many to do what, until now, only a few had the skills or advanced technology to accomplish at high speed. As this burgeoning technology develops, users must respect the rights of those who have enabled its creation—those very content creators who may be displaced by it. And while we understand the real threat that generative AI poses to the livelihoods of members of the creative class, it also poses a risk to brands that have used visuals to meticulously craft their identity. At the same time, both creatives and corporate interests have a dramatic opportunity to build portfolios of their works and branded materials, meta-tag them, and train their own generative AI platforms that can produce authorized, proprietary (paid-up or royalty-bearing) goods as instant revenue streams.
TAKEAWAYS
Generative AI, which uses data lakes and question snippets to recover patterns and relationships, is becoming more prevalent in creative industries. However, the legal implications of using generative AI are still unclear, particularly in relation to copyright infringement, ownership of AI-generated works, and unlicensed content in training data.
✓ Courts are currently trying to establish how intellectual property laws should be applied to generative AI, and several cases have already been filed.
✓ To protect themselves from unintentionally violating copyright laws, companies that use generative AI need to ensure that they are in compliance with the law and take steps to mitigate potential risks, such as ensuring they use training data free from unlicensed content and developing ways to show provenance of generated content.
✓ Both individual content creators and brands that create content should take steps to examine risks to their intellectual property portfolios and protect those assets.
Adapted from content posted on hbr.org, April 7, 2023 (product #H07K15).
by Oguz A. Acar
Prompt engineering has taken the generative AI world by storm. The job, which entails optimizing textual input to effectively communicate with large language models, has been hailed by the World Economic Forum as the number one “job of the future,” while OpenAI CEO Sam Altman characterized it as an “amazingly high-leveraged skill.” Social media brims with a new wave of influencers showcasing “magic prompts” and pledging amazing outcomes.
However, despite the buzz surrounding it, the prominence of prompt engineering may be fleeting for several reasons. First, future generations of AI systems will get more intuitive and adept at understanding natural language, reducing the need for meticulously engineered prompts. Second, new AI language models like GPT-4 already show great promise in crafting prompts—AI itself is on the verge of rendering prompt engineering obsolete. Lastly, the efficacy of prompts is contingent on the specific algorithm, limiting their utility across diverse AI models and versions.
So, what is a more enduring and adaptable skill that will keep enabling us to harness the potential of generative AI? It is problem formulation—the ability to identify, analyze, and delineate problems.
Problem formulation and prompt engineering differ in their focus, core tasks, and underlying abilities. Prompt engineering focuses on crafting the optimal textual input by selecting the appropriate words, phrases, sentence structures, and punctuation. In contrast, problem formulation emphasizes defining the problem by delineating its focus, scope, and boundaries. Prompt engineering requires a firm grasp of a specific AI tool and linguistic proficiency, while problem formulation necessitates a comprehensive understanding of the problem domain and the ability to distill real-world issues. The fact is, without a well-formulated problem, even the most sophisticated prompts will fall short. However, once a problem is clearly defined, the linguistic nuances of a prompt become tangential to the solution.
Unfortunately, problem formulation is a widely overlooked and underdeveloped skill for most of us. One reason is the disproportionate emphasis given to problem-solving at the expense of formulation. This imbalance is perhaps best illustrated by the prevalent yet misguided management adage, “Don’t bring me problems. Bring me solutions.” It is therefore not surprising to see a survey revealing that 85% of C-suite executives consider their organizations bad at diagnosing problems.1
How can you get better at problem formulation? By synthesizing insights from past research on problem formulation and job design as well as my own experience and research on crowdsourcing platforms—where organizational challenges are regularly articulated and opened up to large audiences—I have identified four key components for effective problem formulation: problem diagnosis, decomposition, reframing, and constraint design.
Problem diagnosis is about identifying the core problem for AI to solve. In other words, it concerns identifying the main objective you want generative AI to accomplish. Some problems are relatively simple to pinpoint, such as when the objective is gaining information on a specific topic, like various human resources management strategies for employee compensation. Others are more challenging, such as when exploring solutions to an innovation problem.
A case in point is InnoCentive (now Wazoku Crowd). The company has helped its clients formulate more than 2,500 problems, with an impressive success rate over 80%. My interviews with InnoCentive employees revealed that a key factor behind this success was their ability to discern the fundamental issue underlying a problem. In fact, they often start their problem formulation process by using the “Five Whys” technique to distinguish the root causes from mere symptoms.
A particular instance is the problem of cleaning up subarctic waters after the catastrophic Exxon Valdez oil spill. Collaborating with the Oil Spill Recovery Institute, InnoCentive pinpointed the root cause of the oil cleanup issue as the viscosity of the crude oil: The frozen oil became too thick to pump from barges. This diagnosis was key to finally cracking the two-decade-old problem with a solution that involved using a modified version of construction equipment designed to vibrate the oil, keeping it in a liquid state.
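The "Five Whys" drill-down can be pictured as repeatedly following why-links from a symptom toward a root cause. A toy sketch, with an invented causal chain loosely inspired by the oil-spill example:

```python
# Invented why-links for illustration: each answer becomes the next "why" target.
why_links = {
    "cleanup barges miss their quotas": "pumps jam during transfer",
    "pumps jam during transfer": "the recovered oil is too viscous",
    "the recovered oil is too viscous": "it freezes in subarctic water",
}

def five_whys(symptom, why_links, max_depth=5):
    """Follow cause links until no deeper answer exists or max_depth is reached."""
    chain = [symptom]
    while len(chain) <= max_depth and chain[-1] in why_links:
        chain.append(why_links[chain[-1]])
    return chain

chain = five_whys("cleanup barges miss their quotas", why_links)
print(chain[-1])  # → it freezes in subarctic water
```

In real problem formulation the "links" come from interviews and evidence, not a lookup table; the sketch only shows the mechanical structure of the technique.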
Problem decomposition entails breaking down complex problems into smaller, manageable subproblems. This is particularly important when you are tackling multifaceted problems, which are often too convoluted to generate useful solutions.
Take the InnoCentive amyotrophic lateral sclerosis (ALS) challenge, for example. Rather than seeking solutions for the broad problem of discovering a treatment for ALS, the challenge concentrated on a subcomponent of it: detecting and monitoring the progress of the disease. Consequently, an ALS biomarker was developed for the first time, providing a noninvasive and cost-efficient solution based on measuring electrical current flow through muscle tissue.
I tested how AI improves with problem decomposition using a timely and common organizational challenge: implementing a robust cybersecurity framework. Bing’s AI-powered solutions were too broad and generic to be immediately useful. But after breaking the challenge down into subproblems—e.g., security policies, vulnerability assessments, authentication protocols, and employee training—the solutions improved considerably. Methods such as functional decomposition or a work breakdown structure can help you visually depict complex problems and simplify the identification of the individual components, and their interconnections, that are most relevant to your organization.
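The decomposition step can be made mechanical by templating one focused prompt per subproblem instead of one prompt for the whole challenge. A minimal sketch, with the subproblem list taken from the cybersecurity example above (the prompt wording is illustrative):

```python
# Sketch: turn one broad challenge into focused, per-subproblem prompts.
challenge = "implementing a robust cybersecurity framework"
subproblems = [
    "security policies",
    "vulnerability assessments",
    "authentication protocols",
    "employee training",
]

def decompose_prompts(challenge, subproblems):
    """Build one focused prompt per subproblem of a broad challenge."""
    return [
        f"For the challenge of {challenge}, propose three concrete actions "
        f"addressing the subproblem of {sub}."
        for sub in subproblems
    ]

prompts = decompose_prompts(challenge, subproblems)
print(len(prompts))  # → 4
```

Each focused prompt would then be sent to the model separately, so that every subproblem gets a specific, actionable answer.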
Problem reframing involves changing the perspective from which a problem is viewed, enabling alternative interpretations. By reframing a problem in various ways, you can guide AI to broaden the scope of potential solutions, which can, in turn, help you find optimal solutions and overcome creative roadblocks.
Consider Doug Dietz, an innovation architect at GE HealthCare, whose main responsibility was designing state-of-the-art MRI scanners. During a hospital visit, he saw a terrified child awaiting an MRI scan and discovered that a staggering 80% of children needed sedation to cope with the intimidating experience. This revelation prompted him to reframe the problem: “How can we turn the daunting MRI experience into an exciting adventure for kids?” This fresh angle led to the development of the GE Adventure Series, which dramatically lowered pediatric sedation rates to a mere 15%, increased patient satisfaction scores by 90%, and improved machine efficiency.
Now imagine this: Employees are complaining about the lack of available parking spaces at the office building. The initial framing may focus on increasing parking space, but by reframing the problem from the employees’ perspective—finding parking stressful or having limited commuting options—you can explore different solutions. Indeed, when I asked ChatGPT to generate solutions for the parking space problem using initial and alternative frames, the former yielded solutions centered on optimizing parking layouts or allocation and finding new spaces. The latter produced a diverse solution set such as promoting alternative transportation, sustainable commuting, and remote work.
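The experiment above can be expressed as a pair of prompt variants, one per frame; the wording below is an illustrative sketch, not the exact prompts used:

```python
# Two framings of the same parking problem (wording is illustrative).
frames = {
    "initial": "How can we increase parking space at the office building?",
    "reframed": "How can we reduce employees' parking stress and expand their commuting options?",
}

def solution_requests(frames):
    """Build one solution-seeking prompt per problem frame."""
    return {name: f"List five solutions to this problem: {q}" for name, q in frames.items()}

reqs = solution_requests(frames)
```

Feeding each variant to the model separately makes the effect of the frame on the solution set easy to compare side by side.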
To effectively reframe problems, consider taking the perspective of users, exploring analogies to represent the problem, using abstraction, and proactively questioning problem objectives or identifying missing components in the problem definition.
Problem constraint design focuses on delineating the boundaries of a problem by defining input, process, and output restrictions of the solution search. You can use constraints to direct AI in generating solutions valuable for the task at hand. When the task is primarily productivity-oriented, employing specific and strict constraints to outline the context, boundaries, and outcome criteria is often more appropriate. In contrast, for creativity-oriented tasks, experimenting with imposing, modifying, and removing constraints allows exploring a wider solution space and discovering novel perspectives.
For example, brand managers are already using several AI tools, such as Lately or Jasper, to produce useful social media content at scale. To ensure this content is aligned with different media and brand image, they are often setting precise constraints on the length, format, tone, or target audience.
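Such constraints often end up encoded in a reusable prompt template. A minimal sketch, where the parameter names and constraint wording are illustrative assumptions rather than any particular tool's API:

```python
def constrained_prompt(task, length=None, fmt=None, tone=None, audience=None):
    """Append explicit output constraints to a content-generation prompt."""
    constraints = []
    if length:
        constraints.append(f"Length: {length}")
    if fmt:
        constraints.append(f"Format: {fmt}")
    if tone:
        constraints.append(f"Tone: {tone}")
    if audience:
        constraints.append(f"Audience: {audience}")
    # With no constraints given, return the bare task unchanged.
    return task + ("\nConstraints:\n- " + "\n- ".join(constraints) if constraints else "")

p = constrained_prompt(
    "Write a post announcing our spring campaign.",
    length="under 280 characters",
    fmt="single tweet",
    tone="upbeat",
    audience="existing customers",
)
print(p)
```

Tightening or dropping individual parameters is then a one-line change, which makes it easy to run the strict, productivity-oriented version and the loosened, creativity-oriented version of the same task.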
When seeking true originality, however, brand managers can eliminate formatting constraints or restrict the output to an unconventional format. A great example is GoFundMe’s Help Changes Everything campaign. The company aimed to generate year-in-review creative content that would not only express gratitude to its donors and evoke emotions but also stand out from the typical year-end content. To accomplish this, it set unorthodox constraints: The visuals would rely exclusively on AI-generated street mural–style art and feature all fundraising campaigns and donors. DALL-E and Stable Diffusion generated individual images that were then transformed into an emotionally charged video. The result: a visually cohesive and striking aesthetic that garnered widespread acclaim.2
• • •
Overall, honing skills in problem diagnosis, decomposition, reframing, and constraint design is essential for aligning AI outcomes with task objectives and fostering effective collaboration with AI systems.
Although prompt engineering may hold the spotlight in the short term, its lack of sustainability, versatility, and transferability limits its long-term relevance. Overemphasizing the crafting of the perfect combination of words can even be counterproductive, as it may detract from the exploration of the problem itself and diminish the user’s sense of control over the creative process. Instead, mastering problem formulation could be the key to navigating the uncertain future alongside sophisticated AI systems. It might prove to be as pivotal as learning programming languages was during the early days of computing.
TAKEAWAYS
Despite the buzz surrounding prompt engineering, its prominence may be fleeting. Problem formulation—the ability to identify, analyze, and delineate problems—will be a more enduring and adaptable skill that will continue to enable us to harness the potential of generative AI:
✓ Problem formulation involves four components: problem diagnosis, decomposition, reframing, and constraint design—and it necessitates a thorough understanding of the problem domain.
✓ Due to the increasing sophistication of AI, mastering problem formulation may become as important as learning programming languages was in the early days of computing.
1. Thomas Wedell-Wedellsborg, “Are You Solving the Right Problems?,” Harvard Business Review, January–February 2017, https://
2. Audrey Kemp, “US Ad of the Day: GoFundMe Paints the Power of Donating in ‘The Bigger Picture,’ ” Drum, December 21, 2022, https://
Adapted from content posted on hbr.org, June 6, 2023 (product #H07NQK).
by Tsedal Neeley
While the question of how organizations can (and should) use AI isn’t a new one, the stakes and urgency of finding answers have skyrocketed with the release of ChatGPT, Midjourney, and other generative AI tools. Everywhere, people are wondering: How can we use AI tools to boost performance? Can we trust AI to make consequential decisions? Will AI take away my job?
The power of AI introduced by OpenAI, Microsoft, and NVIDIA—and the pressure to compete in the market—makes it inevitable that your organization will have to navigate the operational and ethical considerations of machine learning, large language models, and much more. And while many leaders are focused on operational challenges and disruptions, the ethical concerns are at least as—if not more—pressing. Given how regulation lags technological capabilities and how quickly the AI landscape is changing, the burden of ensuring that these tools are used safely and ethically falls to companies.
In my work at the intersection of occupations, technology, and organizations, I’ve examined how leaders can develop digital mindsets and the dangers of biased large language models. I have identified best practices for organizations’ use of technology and amplified consequential issues that ensure that AI implementations are ethical. To help you better identify how you and your company should be thinking about these issues—and make no mistake, you should be thinking about them—I collaborated with HBR to answer eight questions posed by readers on LinkedIn.
To start, it’s important to recognize that the optimal way to work with AI is different from the way we’ve worked with other new technologies. In the past, most new tools simply enabled us to perform tasks more efficiently. People wrote with pens, then typewriters (which were faster), then computers (which were even faster). Each new tool allowed for more efficient writing, but the general processes (drafting, revising, editing) remained largely the same.
AI is different. It has a more substantial influence on our work and our processes because it’s able to find patterns that we can’t see and then use them to provide insights and analysis, predictions, suggestions, and even full drafts all on its own. So instead of thinking of AI as the tools we use, we should think of it as a set of systems with which we can collaborate.
To effectively collaborate with AI at your organization, focus on three things:
A digital mindset is a collection of attitudes and behaviors that help you see new possibilities using data, technology, algorithms, and AI. You don’t have to become a programmer or a data scientist; you simply need to take a new and proactive approach to collaboration (learning to work across platforms), computation (asking and answering the right questions), and change (accepting that it is the only constant). Everyone in your organization should be working toward at least 30% fluency in a handful of topics, such as systems architecture, AI, machine learning, algorithms, AI agents as teammates, cybersecurity, and data-driven experimentation.1
Bringing in new AI requires employees to get used to processing new streams of data and content, analyzing them, and using their findings and outputs to develop a different perspective. Likewise, to use data and technology most efficiently, organizations need an integrated organizational structure. Your company needs to become less siloed and should build a centralized repository of knowledge and data to enable constant sharing and collaboration. Competing with AI requires not only incorporating today’s technologies but also being mentally and structurally prepared to adapt to future advancements. For example, individuals have begun incorporating generative AI (such as ChatGPT) into their daily routines, regardless of whether companies are prepared or willing to embrace its use.
As my colleagues Marco Iansiti and Karim R. Lakhani showed in their book Competing in the Age of AI, the structure of an organization mirrors the architecture of the technological systems within it, and vice versa. If tech systems are static, your organization will be static. But if they’re flexible, your organization will be flexible. This strategy played out successfully at Amazon. The company was having trouble sustaining its growth and its software infrastructure was “cracking under pressure,” according to Iansiti and Lakhani. So Jeff Bezos wrote a memo to employees announcing that all teams should route their data through APIs, which allow various types of software to communicate and share data using set protocols. Anyone who didn’t would be fired. This was an attempt to break the inertia within Amazon’s tech systems—and it worked, dismantling data siloes, increasing collaboration, and helping to build the software- and data-driven operating model we see today. While you may not want to resort to a similar ultimatum, you should think about how the introduction of AI can—and should—change your operations for the better.
Leaders need to recognize that it is not always possible to know how AI systems are making decisions. Some of the very characteristics that allow AI to quickly process huge amounts of data and perform certain tasks more accurately or efficiently than humans can also make it a black box: We can’t see how the output was produced. However, we can all play a role in increasing transparency and accountability in AI decision-making processes in two ways:
Callen Anthony, Beth A. Bechky, and Anne-Laure Fayard identify invisibility and inscrutability as core characteristics that differentiate AI from prior technologies.2 It’s invisible because it often runs in the background of other technologies or platforms without users being aware of it; for every Siri or Alexa that people understand to be AI, there are many technologies, such as antilock brakes, that contain unseen AI systems. It’s inscrutable because, even for AI developers, it’s often impossible to understand how a model reaches an outcome, or even identify all the data points it’s using to get there—good, bad, or otherwise.
As AIs rely on progressively larger datasets, this becomes increasingly true. Consider large language models (LLMs) such as OpenAI’s ChatGPT or Microsoft’s Bing. They are trained on massive datasets of books, web pages, and documents scraped from across the internet—OpenAI’s LLM was trained using 175 billion parameters and was built to predict the likelihood that something would occur (a character, word, or string of words, or even an image or tonal shift in the user’s voice) based on either its preceding or surrounding context. The autocorrect feature on your phone is an example of the accuracy—and inaccuracy—of such predictions. But it’s not just the size of the training data: Many AI algorithms are also self-learning; they keep refining their predictive powers as they get more data and user feedback, adding new parameters along the way.
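The underlying prediction idea can be shown at toy scale: count which word follows which in the training text, then propose the most frequent successor for a given context word, much as autocorrect does. Production LLMs condition on far richer context with billions of parameters; the corpus and code below are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy "language model": count word bigrams in a tiny corpus, then predict
# the most likely next word given the preceding one.
corpus = (
    "the model predicts the next word "
    "the model learns patterns in the data"
).split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed word after `word`."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "model" -- the most common successor
```

Real models also keep learning: as they ingest more data and feedback, the counts (parameters, at scale) are continually refined.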
AIs often have broad capabilities because of invisibility and inscrutability—their ability to work in the background and find patterns beyond our grasp. Currently, there is no way to peer into the inner workings of an AI tool and guarantee that the system is producing accurate or fair output. We must acknowledge that some opacity is a cost of using these powerful systems. As a consequence, leaders should exercise careful judgment in determining when and how it’s appropriate to use AI, and they should document when and how AI is being used. That way people will know that an AI-driven decision was appraised with an appropriate level of skepticism, including its potential risks or shortcomings.
A 2020 research brief by MIT scientists notes that AI models can become more transparent through practices like highlighting specific areas in data that contribute to AI output, building models that are more interpretable, and developing algorithms that can be used to probe how a different model works.3 Similarly, leading AI computer scientist Timnit Gebru and her colleagues Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell (credited as “Shmargaret Shmitchell”) argue that practices like premortem analyses that prompt developers to consider both project risks and potential alternatives to current plans can increase transparency in future technologies.4 Echoing this point, in March 2023, prominent tech entrepreneurs Steve Wozniak and Elon Musk, along with employees of Google and Microsoft, signed a letter advocating for AI development to be more transparent and interpretable.
LLMs come with several serious risks. They can:
Data curation and documentation are two ways to curtail those risks and ensure that LLMs will give responses that are more consistent with—not harmful to—your brand image.
LLMs are often developed using internet-based data containing billions of words. However, common sources of this data, like Reddit and Wikipedia, lack sufficient mechanisms for checking accuracy, fairness, or appropriateness. Consider which perspectives are represented on these sites and which are left out. For example, 67% of Reddit’s contributors are male.5 And on Wikipedia, 84% of contributors are male, with little representation from marginalized populations.6
If you instead build an LLM around more carefully vetted sources, you reduce the risk of inappropriate or harmful responses. Bender and colleagues recommend curating training datasets “through a thoughtful process of deciding what to put in, rather than aiming solely for scale and trying haphazardly to weed out … ‘dangerous,’ ‘unintelligible,’ or ‘otherwise bad’ [data].”7 While this might take more time and resources, it exemplifies the adage that an ounce of prevention is worth a pound of cure.
There will surely be organizations that want to leverage LLMs but lack the resources to train a model with a curated dataset. In situations like this, documentation is crucial because it enables companies to get context from a nonproprietary model’s developers on which datasets it uses and the biases they may contain, as well as guidance on how software built on the model might be appropriately deployed. This practice is analogous to the standardized information used in medicine to indicate which studies have been used in making health-care recommendations.
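In practice, such documentation can travel with the model as structured metadata that a downstream organization inspects before deployment. The fields below are an illustrative sketch in the spirit of the practices described here, not a standard schema:

```python
# Sketch of machine-readable model documentation that a downstream
# organization could inspect before deployment. Field names and values
# are illustrative, not any vendor's actual schema.
model_documentation = {
    "model_name": "example-llm",
    "training_datasets": ["curated news archive", "licensed reference corpus"],
    "known_biases": ["underrepresents non-English sources"],
    "intended_uses": ["internal drafting assistance"],
    "out_of_scope_uses": ["medical or legal advice"],
}

def deployment_check(doc: dict, proposed_use: str) -> bool:
    """Reject deployment if the proposed use is explicitly out of scope."""
    return proposed_use not in doc["out_of_scope_uses"]

print(deployment_check(model_documentation, "internal drafting assistance"))  # True
print(deployment_check(model_documentation, "medical or legal advice"))       # False
```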
AI developers should prioritize documentation to allow for safe and transparent use of their models. And people or organizations experimenting with a model must look for this documentation to understand its risks and whether it aligns with their desired brand image.
Sanitizing datasets is a challenge that your organization can help overcome by prioritizing transparency and fairness over model size and by representing diverse populations in data curation.
First, consider the trade-offs you make. Tech companies have been pursuing larger AI systems because they tend to be more effective at certain tasks, like sustaining human-seeming conversations. However, if a model is too large to fully understand, it’s impossible to rid it of potential biases. To fully combat harmful bias, developers must be able to understand and document the risks inherent to a dataset, which might mean using a smaller one.
Second, if diverse teams, including members of underrepresented populations, collect and produce the data used to train models, you’ll have a better chance of ensuring that people with a variety of perspectives and identities are represented in them. This practice also helps identify unrecognized biases or blinders in the data.
AI will only be trustworthy once it works equitably, and that will happen only if we prioritize diversifying data and development teams and clearly document how AI has been designed for fairness.
AI that uses sensitive employee and customer data is vulnerable to bad actors. To combat these risks, organizations should learn as much as they can about how their AI has been developed and then decide whether it’s appropriate to use secure data with it. They should also keep tech systems updated and earmark budget resources to keep the software secure. This requires continuous action, as a small vulnerability can leave an entire organization open to breaches.
Blockchain innovations can help on this front. A blockchain is a secure, distributed ledger that records data transactions, and it’s currently being used for applications like creating payment systems (not to mention cryptocurrencies).
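The property that makes a blockchain useful here is tamper evidence: each record embeds a cryptographic hash of its predecessor, so altering any earlier entry invalidates everything that follows. A minimal sketch of the idea (not a production ledger, which would add consensus and distribution):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Append a new block that points at the hash of the previous one."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    """Check that every block still points at the true hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger: list = []
append_block(ledger, {"from": "alice", "to": "bob", "amount": 10})
append_block(ledger, {"from": "bob", "to": "carol", "amount": 4})
print(verify(ledger))  # True

ledger[0]["data"]["amount"] = 1000  # tampering with an early record...
print(verify(ledger))               # ...is detected: False
```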
When it comes to your operations more broadly, consider this privacy by design (PbD) framework from former information and privacy commissioner of Ontario Ann Cavoukian, which recommends that organizations embrace seven foundational principles:

1. Proactive not reactive; preventative not remedial
2. Privacy as the default setting
3. Privacy embedded into design
4. Full functionality: positive-sum, not zero-sum
5. End-to-end security: full life-cycle protection
6. Visibility and transparency: keep it open
7. Respect for user privacy: keep it user-centric
Incorporating PbD principles into your operation requires more than hiring privacy personnel or creating a privacy division. All the people in your organization need to be attuned to customer and employee concerns about these issues. Privacy isn’t an afterthought; it needs to be at the core of digital operations, and everyone needs to work to protect it.
Even with the advent of LLMs, AI technology is not yet capable of performing the dizzying range of tasks that humans can, and there are many things that it does worse than the average person. Using each new tool effectively requires understanding its purpose.
For example, think about ChatGPT. By learning about language patterns, it has become so good at predicting which words are supposed to follow others that it can produce seemingly sophisticated text responses to complicated questions. However, there’s a limit to the quality of these outputs because being good at guessing plausible combinations of words and phrases is different from understanding the material. So ChatGPT can produce a poem in the style of Shakespeare because it has learned the particular patterns of his plays and poems, but it cannot produce the original insight into the human condition that informs his work.
By contrast, AI can be better and more efficient than humans at making predictions because it can process much larger amounts of data much more quickly. Examples include predicting early dementia from speech patterns, detecting cancerous tumors indistinguishable to the human eye, and planning safer routes through battlefields.
Employees should therefore be encouraged to evaluate whether AI’s strengths match up to a task and proceed accordingly. If you need to process a lot of information quickly, it can do that. If you need a bunch of new ideas, it can generate them. Even if you need to make a difficult decision, it can offer advice, provided it’s been trained on relevant data.
But you shouldn’t use AI to create meaningful work products without human oversight. If you need to produce many documents with very similar content, AI may be a useful generator of what has long been referred to as “boilerplate” material. But be aware that its outputs are derived from its datasets and algorithms, and they aren’t necessarily good or accurate.
Every technological revolution has created more jobs than it has destroyed. Automobiles put horse-and-buggy drivers out of business but led to new jobs building and fixing cars, running gas stations, and more. The novelty of AI technologies makes it easy to fear they will replace humans in the workforce. But we should instead view them as ways to augment human performance. For example, companies like Collective[i] have developed AI systems that analyze data to produce highly accurate sales forecasts quickly; traditionally, this work took people days or weeks to pull together. But no salespeople are losing their jobs. Rather, they’ve got more time to focus on more important parts of their work: building relationships, managing, and actually selling.
Similarly, services like OpenAI’s Codex can autogenerate programming code for basic purposes. This doesn’t replace programmers; it allows them to write code more efficiently and automate repetitive tasks like testing so that they can work on higher-level issues such as systems architecture, domain modeling, and user experience.
The long-term effects on jobs are complex and uneven, and there can be periods of job destruction and displacement in certain industries or regions. To ensure that the benefits of technological progress are widely shared, it is crucial to invest in education and workforce development to help people adapt to the new job market.
Individuals and organizations should focus on upskilling and scaling to prepare to make the most of new technologies. AI and robots aren’t replacing humans anytime soon. The more likely reality is that people with digital mindsets will replace those without them.
The harms of AI bias have been widely documented. In their seminal 2018 paper “Gender Shades,” Joy Buolamwini and Timnit Gebru showed that popular facial recognition technologies offered by companies like IBM and Microsoft were nearly perfect at identifying white male faces but misidentified Black female faces as much as 35% of the time.9 Facial recognition can be used to unlock your phone but is also used to monitor patrons at Madison Square Garden, surveil protesters, and tap suspects in police investigations—and misidentification has led to wrongful arrests that can derail people’s lives. As AI grows in power and becomes more integrated into our daily lives, its potential for harm grows exponentially, too. Here are strategies to safeguard AI.
Preventing AI harm requires shifting our focus from the rapid development and deployment of increasingly powerful AI to ensuring that AI is safe before release.
Transparency is also key. Earlier in this article, I explained how clear descriptions of the datasets used in AI and the potential biases within them help reduce harm. When algorithms are openly shared, organizations and individuals can better analyze and understand the potential risks of new tools before using them.
The question of who will ensure safe and responsible AI is currently unanswered. Google, for example, employs an ethical-AI team, but in 2020 the company fired Gebru after she sought to publish a paper warning of the risks of building ever-larger language models. Her exit from Google raised the question of whether tech developers are able, or incentivized, to act as ombudsmen for their own technologies and organizations. More recently, an entire team at Microsoft focused on ethics was laid off.10 But many in the industry recognize the risks, and as noted earlier, even tech icons have called for policy makers to work with technologists to create regulatory systems to govern AI development.
Whether it comes from government, the tech industry, or another independent system, the establishment and protection of watchdogs is crucial to protecting against AI harm.
Even as the AI landscape changes, governments are trying to regulate it. In the United States, 21 AI-related bills were passed into law last year. Notable acts include an Alabama provision outlining guidelines for using facial recognition technology in criminal proceedings and legislation in Vermont that created a Division of Artificial Intelligence to review all AI used by the state government and to propose a state AI code of ethics. In early 2023, the U.S. federal government moved to enact executive actions on AI, which will be vetted over time.
The European Union is also considering legislation—the Artificial Intelligence Act—that includes a classification system determining the level of risk AI could pose to the health and safety or the fundamental rights of a person. Italy has temporarily banned ChatGPT. The African Union has established a working group on AI, and the African Commission on Human and Peoples’ Rights adopted a resolution to address implications for human rights of AI, robotics, and other new and emerging technologies in Africa.
China passed a data protection law in 2021 that established user consent rules for data collection and recently passed a unique policy regulating “deep synthesis technologies” that are used for so-called deep fakes. The British government released an approach that applies existing regulatory guidelines to new AI technology.
• • •
Billions of people around the world are discovering the promise of AI through their experiments with ChatGPT, Bing, Midjourney, and other new tools. Every company will have to confront questions about how these emerging technologies will apply to them and their industries. For some it will mean a significant pivot in their operating models; for others, an opportunity to scale and broaden their offerings. But all must assess their readiness to deploy AI responsibly without perpetuating harm to their stakeholders and the world at large.
TAKEAWAYS
Generative AI tools are poised to change the way every business operates. As your own organization begins strategizing about which to use and how, operational and ethical considerations are inevitable. This article delves into eight of them:
✓ How should I prepare to introduce AI at my organization?
✓ How can we ensure transparency in how AI makes decisions?
✓ How can we erect guardrails around LLMs so that their responses are true and consistent with the brand image we want to project?
✓ How can we ensure that the dataset we use to train AI models is representative and doesn’t include harmful biases?
✓ What are the potential risks of data privacy violations with AI?
✓ How can we encourage employees to use AI for productivity purposes and not simply to take shortcuts?
✓ How worried should we be that AI will replace jobs?
✓ How can my organization ensure that the AI we develop or use won’t harm individuals or groups or violate human rights?
1. Tsedal Neeley, “Developing a Digital Mindset by Following the 30% Rule,” LinkedIn, May 12, 2022, https://
2. Callen Anthony, Beth A. Bechky, and Anne-Laure Fayard, “ ‘Collaborating’ with AI: Taking a System View to Explore the Future of Work,” Organization Science, January 9, 2023, https://
3. Thomas W. Malone, Daniela Rus, and Robert Laubacher, “Artificial Intelligence and the Future of Work,” MIT Research Brief 17 (December 2020), https://
4. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, New York, NY, March 2021, https://
5. Robert M. Bond and R. Kelly Garrett, “Engagement with Fact-Checked Posts on Reddit,” PNAS Nexus 2, no. 3 (March 2023), https://
6. “Community Insights 2021 Report, Thriving Movement,” Wikimedia Meta-Wiki, https://
7. Bender et al., “On the Dangers of Stochastic Parrots.”
8. Ann Cavoukian, “Privacy by Design: The 7 Foundational Principles,” privacybydesign.ca, January 2011, https://
9. Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81 (2018): 77–91, https://
10. Rebecca Bellan, “Microsoft Lays Off an Ethical AI Team as It Doubles Down on OpenAI,” TechCrunch, March 13, 2023, https://
Adapted from content posted on hbr.org, May 9, 2023 (product #H07MEI).
by Kathy Baxter and Yoav Schlesinger
Corporate leaders, academics, policy makers, and countless others are looking for ways to harness generative AI technology. In business, generative AI has the potential to transform the way companies interact with customers and drive business growth. New research shows 67% of senior IT leaders are prioritizing generative AI for their business within the next 18 months, with one-third (33%) naming it as a top priority, and companies are exploring how it could impact every part of the business.1
Senior IT leaders need a trusted, data-secure way for their employees to use these technologies. Seventy-nine percent of these leaders reported concerns that these technologies bring the potential for security risks, and another 73% are concerned about biased outcomes. More broadly, organizations must recognize the need to ensure the ethical, transparent, and responsible use of these technologies.
A business using generative AI technology in an enterprise setting is different from consumers using it for private, individual use. Businesses need to adhere to regulations relevant to their respective industries (think health care), and there’s a minefield of legal, financial, and ethical implications if the content generated is inaccurate, inaccessible, or offensive. For example, the risk of harm when a generative AI chatbot gives incorrect steps for cooking a recipe is much lower than when giving a field-service worker instructions for repairing a piece of heavy machinery. If not designed and deployed with clear ethical guidelines, generative AI can have unintended consequences and potentially cause real harm.
Organizations need a clear and actionable framework for how to use generative AI and to align their generative AI goals with their businesses’ “jobs to be done,” including how generative AI will impact sales, marketing, commerce, service, and IT jobs.
In 2019, we at Salesforce published our trusted principles (transparency, fairness, responsibility, accountability, and reliability), meant to guide the development of ethical AI tools. These can apply to any organization investing in AI. But these principles only go so far if organizations lack an ethical AI practice to operationalize them into the development and adoption of AI technology. A mature ethical AI practice operationalizes its principles or values through responsible product development and deployment—uniting disciplines such as product management, data science, engineering, privacy, legal, user research, design, and accessibility—to mitigate AI’s potential harms and maximize its social benefits. There are models for how organizations can start, mature, and expand these practices; these models provide clear road maps for how to build the infrastructure for ethical AI development.2
But with the mainstream emergence—and accessibility—of generative AI, we recognized that organizations needed guidelines specific to the risks this technology presents. These guidelines don’t replace our principles, but instead act as a North Star for how they can be operationalized and put into practice as businesses develop products and services that use this new technology.
Our new set of guidelines can help organizations evaluate generative AI’s risks and considerations as these tools gain mainstream adoption. They cover five focus areas.
Accuracy

Organizations need to be able to train AI models on their own data to deliver verifiable results that balance accuracy, precision, and recall (the model’s ability to correctly identify positive cases within a given dataset). It’s important to communicate when there is uncertainty regarding generative AI responses and enable people to validate them. This can be done by citing the sources of information the model is using to create content, explaining why the AI gave the response it did, highlighting uncertainty, and creating guardrails that prevent some tasks from being fully automated.
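The balance among accuracy, precision, and recall mentioned above can be made concrete with a short sketch; the labels and predictions below are hypothetical, not drawn from any real system:

```python
# Sketch: computing accuracy, precision, and recall for binary predictions.
# 1 = positive case (e.g., a fraudulent transaction), 0 = negative.

def confusion_counts(y_true, y_pred):
    """Count true positives, false positives, false negatives, true negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def metrics(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)                 # share of all cases correct
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # flagged cases that were real
    recall = tp / (tp + fn) if (tp + fn) else 0.0      # real cases that were flagged
    return accuracy, precision, recall
```

A model tuned for higher recall typically flags more positives at the cost of precision; which balance is right depends on the cost of each kind of error.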
Safety

Make every effort to mitigate bias, toxicity, and harmful outputs by conducting bias, explainability, and robustness assessments. Organizations must protect the privacy of any personally identifying information in the data used for training to prevent potential harm. Further, security assessments can help organizations identify vulnerabilities that may be exploited by bad actors.
Honesty

When collecting data to train and evaluate our models, respect data provenance and ensure there is consent to use that data. This can be done by leveraging open-source and user-provided data. And, when autonomously delivering outputs, it’s necessary to be transparent that an AI has created the content. This can be done through watermarks on the content or through in-app messaging.
Empowerment

While there are some cases where it is best to fully automate processes, AI should more often play a supporting role. Today, generative AI is a great assistant. In industries where building trust is a top priority, such as finance or health care, it’s important that humans be involved in decision-making—with the help of data-driven insights that an AI model may provide—to build trust and maintain transparency. Additionally, ensure the model’s outputs are accessible to all (e.g., generate alt text to accompany images and make text output readable by screen readers). And of course, one must treat content contributors, creators, and data labelers with respect (e.g., fair wages, consent to use their work).
Sustainability

Language models are described as “large” based on the number of values or parameters they use. Some of these large language models have hundreds of billions of parameters, and it takes a lot of energy and water to train them. Training GPT-3, for example, took 1.287 gigawatt-hours, roughly the electricity 120 U.S. homes use in a year, along with 700,000 liters of clean fresh water.3
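The household comparison can be sanity-checked with quick arithmetic, assuming an average U.S. home uses roughly 10,700 kWh of electricity per year (an approximate figure, not from the cited source):

```python
# Back-of-the-envelope check of the GPT-3 comparison quoted above.
# The household-consumption figure is an assumed approximation.

GPT3_TRAINING_KWH = 1.287e6        # 1.287 gigawatt-hours expressed in kWh
KWH_PER_US_HOME_PER_YEAR = 10_700  # assumed average annual household usage

homes_powered_for_a_year = GPT3_TRAINING_KWH / KWH_PER_US_HOME_PER_YEAR
# comes out to roughly 120 homes, matching the comparison in the text
```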
When considering AI models, larger doesn’t always mean better. As we develop our own models, we will strive to minimize their size while maximizing accuracy by training them on large amounts of high-quality CRM data. This helps reduce the carbon footprint, because less computation means less energy consumption and fewer carbon emissions from data centers.
Most organizations will integrate generative AI tools rather than build their own. Here are some tactical tips for safely integrating generative AI in business applications to drive business results:
Companies should train generative AI tools using zero-party data—data that customers share proactively—and first-party data, which they collect directly. Strong data provenance is key to ensuring that models are accurate, original, and trusted. Relying on third-party data—or information obtained from external sources—to train AI tools makes it difficult to ensure that output is accurate.
For example, data brokers may have old data, incorrectly combine data from devices or accounts that don’t belong to the same person, or make inaccurate inferences based on the data. This applies to our customers when we ground the models in their data: if the data in a customer’s CRM all came from data brokers, the personalization may be wrong.
AI is only as good as the data it’s trained on. Models that generate responses to customer support queries will produce inaccurate or out-of-date results if the content they’re grounded in is old, incomplete, or inaccurate, leading to “hallucinations” that state falsehoods as facts. Training data that contains bias will produce tools that propagate bias.
Companies must review all datasets and documents that will be used to train models and remove biased, toxic, and false elements. This process of curation is key to principles of safety and accuracy.
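A curation pass of this kind can be sketched in a few lines. A real pipeline would combine trained toxicity and bias classifiers with human review; the blocklist terms and records below are hypothetical placeholders:

```python
# Sketch: drop training records that match a blocklist of known-bad terms.
# Deliberately simplistic; real curation needs classifiers and human review.

BLOCKLIST = {"slur_placeholder", "debunked_claim"}  # hypothetical terms

def is_clean(record: str) -> bool:
    """True if the record contains no blocklisted token."""
    tokens = set(record.lower().split())
    return not (tokens & BLOCKLIST)

def curate(records):
    """Keep only records that pass the blocklist check."""
    return [r for r in records if is_clean(r)]
```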
Just because something can be automated doesn’t mean it should be. Generative AI tools aren’t always capable of understanding emotional or business context or knowing when they’re wrong or damaging.
Humans need to be involved to review outputs for accuracy, suss out bias, and ensure models are operating as intended. More broadly, generative AI should be seen as a way to augment human capabilities and empower communities, not replace or displace them.
Companies play a critical role in responsibly adopting generative AI and integrating these tools in ways that enhance, not diminish, the working experience of their employees and their customers. This comes back to ensuring the responsible use of AI in maintaining accuracy, safety, honesty, empowerment, and sustainability; mitigating risks; and eliminating biased outcomes. And the commitment should extend beyond immediate corporate interests, encompassing broader societal responsibilities and ethical AI practices.
Generative AI cannot operate on a set-it-and-forget-it basis—the tools need constant oversight. Companies can start by looking for ways to automate the review process by collecting metadata on AI systems and developing standard mitigations for specific risks.
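One way such metadata collection and standard mitigations might look in practice is sketched below; the field names, risk tags, and mitigation text are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

# Map risk tags to standard mitigations. Tags and mitigation text are
# illustrative assumptions, not an established taxonomy.
STANDARD_MITIGATIONS = {
    "hallucination": "require source citations and human review before release",
    "bias": "schedule bias audits against representative test sets",
    "privacy": "strip personally identifying information from training data",
}

@dataclass
class AISystemRecord:
    """Metadata collected for each AI system under oversight."""
    name: str
    owner: str
    training_data_source: str
    risk_tags: list = field(default_factory=list)

    def mitigation_plan(self) -> dict:
        """Standard mitigation for each recognized risk tag on this system."""
        return {tag: STANDARD_MITIGATIONS[tag]
                for tag in self.risk_tags if tag in STANDARD_MITIGATIONS}
```

A registry of such records lets reviewers see at a glance which systems carry which risks and whether the standard mitigation is in place.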
Ultimately, humans also need to be involved in checking output for accuracy, bias, and hallucinations. Companies can consider investing in ethical AI training for frontline engineers and managers so they’re prepared to assess AI tools. If resources are constrained, they can prioritize testing models that have the most potential to cause harm.
Listening to employees, trusted advisers, and impacted communities is key to identifying risks and course-correcting. Companies can create a variety of pathways for employees to report concerns, such as an anonymous hotline, a mailing list, a dedicated Slack or social media channel, or focus groups. Creating incentives for employees to report issues can also be effective.
Some organizations have formed ethics advisory councils—composed of employees from across the company, external experts, or a mix of both—to weigh in on AI development. Finally, having open lines of communication with community stakeholders is key to avoiding unintended consequences.
• • •
With generative AI going mainstream, enterprises have the responsibility to ensure that they’re using this technology ethically and mitigating potential harm. By committing to guidelines and constructing guardrails in advance, companies can ensure that the tools they deploy are accurate, safe, and trusted—and that they help humans flourish.
Generative AI is evolving quickly, so the concrete steps businesses need to take will evolve over time. But sticking to a firm ethical framework can help organizations navigate this period of rapid transformation.
TAKEAWAYS
The adoption of generative AI by businesses comes with ethical risk. To be mindful of these risks and to take necessary steps to reduce them, organizations must prioritize the responsible use of generative AI by ensuring it is accurate, safe, honest, empowering, and sustainable.
✓ Human oversight and participation in decision-making processes should be actively encouraged to ensure that generative AI is used responsibly.
✓ Transparency, fairness, responsibility, accountability, and reliability are the trusted AI principles announced by Salesforce. These principles are applicable to any company making an AI investment.
✓ Strategies for responsibly integrating generative AI and reducing ethical risk include using first-party or zero-party data, maintaining updated and well-labeled data, involving humans in the process, iteratively testing models, and soliciting input from internal and external advisers.
1. “IT Leaders Call Generative AI a ‘Game Changer’ but Seek Progress on Ethics and Trust,” salesforce.com, March 6, 2023, https://
2. “Salesforce Debuts AI Ethics Model: How Ethical Practices Further Responsible Artificial Intelligence,” salesforce.com, September 2, 2021, https://
3. Josh Saul and Dina Bass, “Artificial Intelligence Is Booming—So Is Its Carbon Footprint,” Bloomberg, March 9, 2023, https://
Adapted from content posted on hbr.org, June 5, 2023 (product #H07OC4).
by Eric Siegel
You might think that news of “major AI breakthroughs” would do nothing but help machine learning’s (ML) adoption. If only. Even before the latest splashes—most notably OpenAI’s ChatGPT and other generative AI tools—the rich narrative about an emerging, all-powerful AI was already a growing problem for applied ML. That’s because for most ML projects, the buzzword AI goes too far. It overly inflates expectations and distracts from the precise way ML will improve business operations.
Most practical use cases of ML—designed to improve the efficiencies of existing business operations—innovate in fairly straightforward ways. Don’t let the glare emanating from this glitzy technology obscure the simplicity of its fundamental duty: The purpose of ML is to issue actionable predictions, which is why it’s sometimes also called predictive analytics. This means real value, as long as you eschew false hype that it is “highly accurate,” like a digital crystal ball.
This capability translates into tangible value in an uncomplicated manner. The predictions drive millions of operational decisions. For example, by predicting which customers are most likely to cancel, a company can provide those customers incentives to stick around. And by predicting which credit-card transactions are fraudulent, a card processor can disallow them. It’s practical ML use cases like those that deliver the greatest impact on existing business operations, and the advanced data science methods that such projects apply boil down to ML—and only ML.
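The churn example can be sketched as follows; the scores would come from a trained model, and the customer IDs, probabilities, and threshold here are hypothetical:

```python
# Sketch: turning churn predictions into an operational retention decision.
# The probabilities below stand in for a trained model's output.

CHURN_SCORES = {           # model's predicted probability of cancellation
    "cust_001": 0.91,
    "cust_002": 0.12,
    "cust_003": 0.67,
}

RETENTION_THRESHOLD = 0.6  # business-chosen cutoff, tuned to the offer budget

def customers_to_incentivize(scores, threshold=RETENTION_THRESHOLD):
    """Customers whose predicted churn risk justifies a retention offer."""
    return sorted(c for c, p in scores.items() if p >= threshold)
```

The prediction itself is the whole of the ML contribution; the value comes from the routine decision it drives, here the choice of who receives an offer.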
Here’s the problem: Most people conceive of ML as “AI.” This is a reasonable misunderstanding. But “AI” suffers from an unrelenting, incurable case of vagueness—it is a catch-all term of art that does not consistently refer to any particular method or value proposition. Calling ML tools “AI” oversells what most ML business deployments actually do. In fact, you couldn’t overpromise more than you do when you call something “AI.” The moniker invokes the notion of artificial general intelligence (AGI), software capable of any intellectual task humans can do.
This exacerbates a significant problem with ML projects: They often lack a keen focus on their value—exactly how ML will render business processes more effective. As a result, most ML projects fail to deliver value.1 In contrast, ML projects that keep their concrete operational objective front and center stand a good chance of achieving that objective.
“ ‘AI-powered’ is tech’s meaningless equivalent of ‘all natural.’ ”
—Devin Coldewey, TechCrunch, 2022
AI cannot get away from AGI for two reasons. First, the term is generally thrown around without clarifying whether we’re talking about AGI or narrow AI, a term that essentially means practical, focused ML deployments. Despite the tremendous differences, the boundary between them blurs in common rhetoric and software sales materials.
Second, there’s no satisfactory way to define AI besides AGI. Defining AI as something other than AGI has become a research challenge unto itself, albeit a quixotic one. If it doesn’t mean AGI, it doesn’t mean anything—other suggested definitions either fail to qualify as “intelligent” in the ambitious spirit implied by “AI” or fail to establish an objective goal. We face this conundrum whether trying to pinpoint (1) a definition for AI, (2) the criteria by which a computer would qualify as “intelligent,” or (3) a performance benchmark that would certify true AI. These three are one and the same.
The problem is with the word intelligence itself. When used to describe a machine, it’s relentlessly nebulous. That’s bad news if AI is meant to be a legitimate field. Engineering can’t pursue an imprecise goal. If you can’t define it, you can’t build it. To develop an apparatus, you must be able to measure how good it is—how well it performs and how close you are to the goal—so that you know you’re making progress and so that you ultimately know when you’ve succeeded in developing it.
In a vain attempt to fend off this dilemma, the industry continually performs an awkward dance of AI definitions that I call the AI shuffle. AI means computers that do something smart (a circular definition). No, it’s intelligence demonstrated by machines (even more circular, if that’s possible). Rather, it’s a system that employs certain advanced methodologies, such as ML, natural language processing, rule-based systems, speech recognition, computer vision, or other techniques that operate probabilistically (clearly, employing one or more of these methods doesn’t automatically qualify a system as intelligent).
But surely a machine would qualify as intelligent if it seemed sufficiently humanlike, if you couldn’t distinguish it from a human, say, by interrogating it in a chatroom—the famous Turing test. But the ability to fool people is an arbitrary, moving target, since human subjects become wiser to the trickery over time. Any given system will only pass the test at most once—fool us twice, shame on humanity. Another reason that passing the Turing test misses the mark is because there’s limited value or utility in fooling people. If AI could exist, certainly it’s supposed to be useful.
What if we define AI by what it’s capable of? For example, if we define AI as software that can perform a task so difficult that it traditionally requires a human, such as driving a car, mastering chess, or recognizing human faces. It turns out that this definition doesn’t work either because, once a computer can do something, we tend to trivialize it. After all, computers can manage only mechanical tasks that are well understood and well specified. Once the challenge is surmounted, the accomplishment suddenly loses its charm and the computer that can do it doesn’t seem “intelligent” after all—at least not to the wholehearted extent intended by the term AI. Once computers mastered chess, there was little feeling that we’d “solved” AI.
This paradox, known as the AI effect, tells us that if it’s possible, it’s not intelligent. Suffering from an ever-elusive objective, AI inadvertently equates to “getting computers to do things too difficult for computers to do”—artificial impossibility. No destination will satisfy once you arrive; AI categorically defies definition. With due irony, the computer science pioneer Larry Tesler famously suggested that we might as well define AI as “whatever machines haven’t done yet.”
Ironically, it was ML’s measurable success that hyped up AI in the first place. After all, improving measurable performance is supervised machine learning in a nutshell. The feedback from evaluating the system against a benchmark—such as a sample of labeled data—guides its next improvement. By this process, ML delivers unprecedented value in countless ways. It has earned its title as “the most important general-purpose technology of our era,” as Andrew McAfee and Erik Brynjolfsson put it.2 More than anything else, ML’s proven leaps and bounds have fueled AI hype.
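That feedback loop can be shown in miniature: evaluate candidate models against labeled data and keep the one that scores best. The “model” below is just a threshold rule on a single feature, chosen purely for illustration, with hypothetical labeled examples:

```python
# Miniature supervised feedback loop: score candidates against a labeled
# benchmark and keep the best. Real systems use far richer models, but the
# evaluate-then-improve structure is the same.

def accuracy(threshold, data):
    """Fraction of labeled examples the threshold rule classifies correctly."""
    return sum((x >= threshold) == y for x, y in data) / len(data)

def fit_threshold(data):
    """Pick the candidate threshold with the best benchmark score."""
    candidates = sorted({x for x, _ in data})
    return max(candidates, key=lambda t: accuracy(t, data))

LABELED = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]  # (feature, true class)
```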
“I predict we will see the third AI winter within the next five years.… When I graduated with my PhD in AI and ML in ’91, AI was literally a bad word. No company would consider hiring somebody who was in AI.”
—Usama Fayyad, June 23, 2022, speaking at Machine Learning Week
There is one way to overcome this definition dilemma: Go all in and define “AI” as AGI, software capable of any intellectual task humans can do. If this science fiction–sounding goal were achieved, I submit that there would be a strong argument that it qualified as “intelligent.” And it’s a measurable goal—at least in principle, if not in practicality. For example, its developers could benchmark the system against a set of 1 million tasks, including tens of thousands of complicated email requests you might send to a virtual assistant, various instructions for a warehouse employee you’d just as well issue to a robot, and even brief one-paragraph overviews for how the machine should, in the role of CEO, run a Fortune 500 company to profitability.
AGI may set a clear-cut objective, but it’s out of this world—as unwieldy an ambition as there can be. Nobody knows if and when it could be achieved.
Therein lies the problem for typical ML projects. By calling them “AI,” we convey that they sit on the same spectrum as AGI, that they’re built on technology that is actively inching along in that direction. “AI” haunts ML. It invokes a grandiose narrative and pumps up expectations, selling real technology in unrealistic terms. This confuses decision-makers and dead-ends projects left and right.
It’s understandable that so many would want to claim a piece of the AI pie, if it’s made of the same ingredients as AGI. The wish fulfillment AGI promises—a kind of ultimate power—is so seductive that it’s nearly irresistible.
But there’s a better way forward, one that’s realistic and that I would argue is already exciting enough: running major operations—the main things we do as organizations—more effectively! Most commercial ML projects aim to do just that. For them to succeed at a higher rate, we’ve got to come down to earth. If your aim is to deliver operational value, don’t buy “AI” and don’t sell “AI.” Say what you mean and mean what you say. If a technology consists of ML, let’s call it that.
Reports of the human mind’s looming obsolescence have been greatly exaggerated, which means another era of AI disillusionment is nigh. And, in the long run, we will continue to experience AI winters as long as we continue to hyperbolically apply the term AI. But if we tone down the rhetoric—or otherwise differentiate ML from AI—we will properly insulate ML as an industry from the next AI winter. This includes resisting the temptation to ride hype waves and refrain from passively affirming starry-eyed decision-makers who appear to be bowing at the altar of an all-capable AI. Otherwise, the danger is clear and present: When the hype fades, the overselling is debunked, and winter arrives, much of ML’s true value proposition will be unnecessarily disposed of along with the myths, like the baby with the bathwater.
TAKEAWAYS
With breathtaking new capabilities from generative AI released every several months—and AI hype escalating at an even higher rate—it’s high time we differentiate most of today’s practical machine learning (ML) projects from generative AI’s advances.
✓ For most ML projects, the term AI goes entirely too far. It alludes to human-level capabilities that are better described as AGI (artificial general intelligence)—software capable of any intellectual task humans can do—and no one knows if and when AGI could ever be achieved.
✓ In fact, ML initiatives are most effective when used to optimize existing processes; these are the types of solutions that provide the greatest return on investment for businesses.
✓ Including all ML initiatives under the “AI” umbrella oversells and misleads, contributing to a high failure rate for ML business deployments.
1. Eric Siegel, “Models Are Rarely Deployed: An Industry-Wide Failure in Machine Learning Leadership,” KDnuggets, January 17, 2022, https://
2. Erik Brynjolfsson and Andrew McAfee, “The Business of Artificial Intelligence,” hbr.org, July 18, 2017, https://
Adapted from content posted on hbr.org, June 2, 2023 (product #H07NQA).
This article is a product of the author’s work as the Bodily Bicentennial Professor in Analytics at the University of Virginia Darden School of Business.
MARK ABRAHAM is a managing director and a senior partner at Boston Consulting Group.
OGUZ A. ACAR is a chair in marketing at King’s Business School, King’s College London.
GIL APPEL is an assistant professor of marketing at the George Washington University School of Business. His research uncovers insights driven by consumer interactions with digital technologies, such as big data, social media, NFTs, and AI.
KATHY BAXTER is the principal architect of ethical AI practice at Salesforce, developing research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. She is a member of Singapore’s Advisory Council on the Ethical Use of AI and Data and a Visiting AI Fellow at NIST and is on the board of EqualAI. Prior to Salesforce, she worked at Google, eBay, and Oracle in user experience research. She is a coauthor of Understanding Your Users: A Practical Guide to User Research Methodologies.
NICOLA MORINI BIANZINO is the EY global CTO, focused on bringing technology products to EY clients, positioning technology at the heart of the organization, advising global clients on technology investment and their innovation agendas, and providing industrialized technology products to meet their most pressing business needs. An early AI pioneer, he wrote a thesis on the application of neural networks to business in 1997. He holds a master’s degree in artificial intelligence and economics from the University of Florence.
DAVID DE CREMER is the Dunton Family Dean of the D’Amore-McKim School of Business and professor of management at Northeastern University (U.S.). Before moving to Northeastern University, he was the KPMG chaired professor in management studies at Cambridge University (U.K.) and a provost chaired professor in management and organizations at NUS Business School (Singapore), where he was also the founder and director of the Centre on AI Technology for Humankind. He is a Thinkers50 Radar thought leader, included in the top 2% of scientists worldwide, and a Top 30 Global Management Speaker. He is the author of The AI-Savvy Leader (Harvard Business Review Press, 2024). His website is www.daviddecremer.com.
TOJIN T. EAPEN is an assistant professor at the Robert J. Trulaske Sr. College of Business at the University of Missouri and a principal consultant at Innomantra.
DAVID C. EDELMAN is an executive adviser and a senior lecturer at Harvard Business School.
BEN FALK is a director in EY’s Chief Technology Office, helping lead EY’s Emerging Technology Lab. He has a background in finance and technology, having spent a decade working for large hedge funds as an economist and strategist before joining an AI fintech startup leveraging natural language techniques. Before joining EY, he launched a personal data agency startup, helping consumers manage and enforce their personal data rights.
DANIEL J. FINKENSTADT is an assistant professor of defense management at the Naval Postgraduate School in Monterey, California, and a principal of the advisory firm Wolf Stake Consulting.
JOSH FOLK is a cofounder and the president of enterprise solutions at IdeaScale, a cloud-based innovation-software platform.
DINKAR JAIN is a visiting professor at the UCLA Anderson School of Management and Santa Clara University. He is Meta’s former head of artificial intelligence for ads and director of product management.
SHEEN S. LEVINE is an assistant professor at The University of Texas at Dallas and Columbia University, New York, studying and teaching how people behave and how they impact others, organizations, and markets. He is thankful for the advice of Apollinaria Nemkova, an AI PhD researcher.
SALLY E. LORIMER is a principal at ZS, a global professional services firm.
JULIANA NEELBAUER is a partner at Fox Rothschild in the corporate, intellectual property, emerging markets, and entertainment and sports law groups. She is a lecturer at the University of Maryland and Georgetown University on securities law, negotiations, digital assets, and business law.
TSEDAL NEELEY is the Naylor Fitzhugh Professor of Business Administration and senior associate dean of faculty and research at Harvard Business School. She is the coauthor of The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI and the author of Remote Work Revolution: Succeeding from Anywhere.
MARC RAMOS is the chief learning officer of Cornerstone, a leader in learning and talent management technologies. Marc’s career as a learning leader spans 25 years of experience with Google, Microsoft, Accenture, and Oracle.
YOAV SCHLESINGER is an architect of ethical AI practice at Salesforce, helping the company embed and instantiate ethical product practices to maximize the societal benefits of AI. Prior to coming to Salesforce, Yoav was a founding member of the Tech and Society Solutions Lab at Omidyar Network, where he launched the Responsible Computer Science Challenge and helped develop EthicalOS, a risk mitigation tool kit for product managers.
DAVID A. SCHWEIDEL is the Rebecca Cheney McGreevy Endowed Chair and Professor of Marketing at Emory University’s Goizueta Business School. His research focuses on consumer interactions with technology and how this shapes marketing practice.
ARUN SHASTRI leads the artificial intelligence practice at ZS, a global professional services firm.
ERIC SIEGEL is a leading consultant and former Columbia University professor who helps companies deploy machine learning. He is the founder of the long-running Machine Learning Week conference series, a frequent keynote speaker, and executive editor of the Machine Learning Times. Eric authored the book The AI Playbook: Mastering the Rare Art of Machine Learning Deployment and the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at hundreds of universities. He won the Distinguished Faculty Award when he was a professor at Columbia University, where he taught graduate courses in machine learning and AI. Later, he served as a business school professor at the University of Virginia Darden School of Business. Eric also publishes op-eds on analytics and social justice.
PRABHAKANT SINHA is a cofounder of ZS, a global professional services firm. He also teaches sales executives at the Indian School of Business.
LOKESH VENKATASWAMY is the CEO and managing director of Innomantra, an innovation and intellectual property consulting firm in Bengaluru, India.
MARC ZAO-SANDERS is the CEO and cofounder of filtered.com, which develops algorithmic technology to make sense of corporate skills and learning content.